Search Results

Search found 19690 results on 788 pages for 'result partitioning'.


  • power and modulo on the fly for big numbers

    - by user unknown
    I raise some base b to the power p and take the modulo m of that. Let's assume b=55170 or 55172 and m=3043839241 (which happens to be the square of 55171). The Linux calculator bc gives these results (we need them for control):

    ```
    echo "p=5606;b=55171;m=b*b;((b-1)^p)%m;((b+1)^p)%m" | bc
    2734550616
    309288627
    ```

    Now calculating 55170^5606 gives a somewhat large number, but since I have to do a modulo operation anyway, I thought I could circumvent the usage of BigInt, because of:

    ```
    (a*b) % c == ((a%c) * (b%c)) % c
    ```

    i.e.

    ```
    (9*7) % 5 == ((9%5) * (7%5)) % 5
    => 63 % 5 == (4 * 2) % 5
    => 3 == 8 % 5
    ```

    ...and a^d = a^(b+c) = a^b * a^c, therefore I can split the exponent in half, which gives, for even or odd d, d/2 and d-(d/2); so for 8^5 I can calculate 8^2 * 8^3. So my (defective) method, which always cuts off the divisor on the fly, looks like this:

    ```scala
    def powMod(b: Long, pot: Int, mod: Long): Long = {
      if (pot == 1) b % mod
      else {
        val pot2 = pot / 2
        val pm1 = powMod(b, pot2, mod)
        val pm2 = powMod(b, pot - pot2, mod)
        (pm1 * pm2) % mod
      }
    }
    ```

    Fed with some values:

    ```scala
    powMod(55170, 5606, 3043839241L)
    res2: Long = 1885539617
    powMod(55172, 5606, 3043839241L)
    res4: Long = 309288627
    ```

    As we can see, the second result is exactly the same as the one above, but the first one looks quite different. I'm doing a lot of such calculations, and they seem to be accurate as long as they stay in the range of Int, but I can't see any error. Using a BigInt works as well, but is way too slow:

    ```scala
    def calc2(n: Int, pri: Long) = {
      val p: BigInt = pri
      val p3 = p * p
      val p1 = (p - 1).pow(n) % p3
      val p2 = (p + 1).pow(n) % p3
      print("p1: " + p1 + " p2: " + p2)
    }

    calc2(5606, 55171)
    // p1: 2734550616 p2: 309288627 (same result as with bc)
    ```

    Can somebody see the error in powMod?
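
    The hint that results "seem to be accurate as long as they stay in the range of Int" points at overflow: here mod² ≈ 9.26×10^18 is slightly larger than Long.MaxValue ≈ 9.22×10^18, so pm1 * pm2 can wrap around even though each factor fits comfortably in a Long. A sketch of a fix under that assumption (not from the original post) promotes only the product to BigInt:

    ```scala
    // Sketch, assuming the bug is 64-bit overflow of pm1 * pm2:
    // each factor is < mod, but mod * mod exceeds Long.MaxValue.
    def powModSafe(b: Long, pot: Int, mod: Long): Long =
      if (pot == 1) b % mod
      else {
        val pot2 = pot / 2
        val pm1 = powModSafe(b, pot2, mod)
        val pm2 = powModSafe(b, pot - pot2, mod)
        ((BigInt(pm1) * BigInt(pm2)) % mod).toLong  // only the product is arbitrary-precision
      }
    ```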

    Read the article

  • MySQL error code 1329 in function

    - by Sharad Sharma
    ```sql
    DELIMITER //
    CREATE DEFINER=`root`@`localhost` FUNCTION `formatMovieNames`(lID int) RETURNS varchar(1000) CHARSET latin1
    BEGIN
      DECLARE output varchar(1000);
      DECLARE done INT DEFAULT 0;
      DECLARE a varchar(200);
      DECLARE cur1 CURSOR FOR
        SELECT fileName FROM swlp4_movie
        WHERE movieID IN (SELECT movieID FROM lesson_movie_map
                          WHERE lessonID = lID ORDER BY lm_map_id);
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
      OPEN cur1;
      read_loop: LOOP
        FETCH cur1 INTO a;
        IF done = 1 THEN
          LEAVE read_loop;
        END IF;
        SET output = concat(output, 'movie:[', a, ']<br/>');
        SET output = substr(output, 0, length(@output) - 5);
      END LOOP;
      CLOSE cur1;
      RETURN output;
    END//
    ```

    I have created this function, but when I run it I do not get any output, although

    ```sql
    SELECT fileName FROM swlp4_movie
    WHERE movieID IN (SELECT movieID FROM lesson_movie_map
                      WHERE lessonID = 24 ORDER BY lm_map_id);
    ```

    brings the correct result. I am trying to get a result like movie:['movieName']<br/> movie:['movieName1'] and so on (had to change the br tag, because it was adding a line break). Can't figure out what I am doing wrong.
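
    A few hedged guesses (not from the original post): `output` is declared without a default, and in MySQL `CONCAT(NULL, ...)` stays NULL, so nothing ever accumulates; `@output` is a session variable distinct from the declared `output`, so `length(@output)` is NULL too; and `SUBSTR` is 1-indexed, so the trim should happen once after the loop rather than inside it. A sketch with those three changes:

    ```sql
    -- Sketch, assuming the intent is a <br/>-separated list of file names.
    DELIMITER //
    CREATE FUNCTION `formatMovieNames`(lID int) RETURNS varchar(1000) CHARSET latin1
    BEGIN
      DECLARE output varchar(1000) DEFAULT '';  -- initialize: CONCAT(NULL, x) is NULL
      DECLARE done INT DEFAULT 0;
      DECLARE a varchar(200);
      DECLARE cur1 CURSOR FOR
        SELECT fileName FROM swlp4_movie
        WHERE movieID IN (SELECT movieID FROM lesson_movie_map WHERE lessonID = lID);
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
      OPEN cur1;
      read_loop: LOOP
        FETCH cur1 INTO a;
        IF done = 1 THEN
          LEAVE read_loop;
        END IF;
        SET output = concat(output, 'movie:[', a, ']<br/>');
      END LOOP;
      CLOSE cur1;
      -- trim the trailing '<br/>' once, after the loop (SUBSTR is 1-indexed)
      RETURN substr(output, 1, char_length(output) - 5);
    END//
    DELIMITER ;
    ```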

    Read the article

  • Having trouble uploading a file

    - by neo skosana
    Hi, I am having trouble uploading a file. First of all, I have a class:

    ```php
    class upload {
        private $name;
        private $document;

        public function __construct($nme, $doc) {
            $this->setName($nme);
            $this->setDocument($doc);
        }

        public function setName($nme) {
            $this->name = $nme;
        }

        public function setDocument($doc) {
            $this->document = $doc;
        }

        public function fileNotPdf() {
            /* Was the file a PDF? */
            if ($this->document['type'] != "application/pdf") {
                return true;
            } else {
                return false;
            }
        }

        public function fileNotUploaded() {
            /* Make sure that the file was POSTed. */
            if (!(is_uploaded_file($this->document['tmp_name']))) {
                return true;
            } else {
                return false;
            }
        }

        public function fileNotMoved($repositry) {
            /* Move the uploaded file to its final destination. */
            $result = move_uploaded_file($this->document['tmp_name'],
                                         "$repositry/$this->name.pdf");
            if ($result) {
                return false;
            } else {
                return true;
            }
        }
    }
    ```

    Now for my main page:

    ```php
    $docName = $_POST['name'];
    $page = $_FILES['doc'];

    if ($_POST['submit']) {
        /* Set a few constants */
        $filerepository = "np";
        $uploadObj = new upload($docName, $page);
        if ($uploadObj->fileNotUploaded()) {
            promptUser("There was a problem uploading the file.", "");
        } elseif ($uploadObj->fileNotPdf()) {
            promptUser("File must be in pdf format.", "");
        } elseif ($uploadObj->fileNotMoved($filerepository)) {
            promptUser("File could not be uploaded to final destination.", "");
        } else {
            promptUser("File has been successfully uploaded.", "");
        }
    }
    ```

    The errors that I get:

    ```
    Warning: move_uploaded_file(about.pdf) [function.move-uploaded-file]: failed to open stream: No such file or directory in...
    Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'c:\xampp\tmp\php13.tmp' to 'about.pdf' in...
    File could not be uploaded to final destination.
    ```
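
    A hedged pointer: "failed to open stream: No such file or directory" from `move_uploaded_file()` usually means the destination directory does not exist (relative paths are resolved against the running script's working directory). A defensive sketch of the move, as an assumption rather than a confirmed diagnosis:

    ```php
    // Sketch: create the repository folder if it is missing and build the
    // target path explicitly before moving the uploaded file.
    public function fileNotMoved($repository) {
        if (!is_dir($repository)) {
            mkdir($repository, 0755, true);   // create missing directories
        }
        $target = $repository . '/' . $this->name . '.pdf';
        return !move_uploaded_file($this->document['tmp_name'], $target);
    }
    ```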

    Read the article

  • How to do validation when composing an object of one class in another class?

    - by haansi
    Hi, I have an IPAddress class which has one property named IP, and in its setter I validate the incoming data; if the data is invalid it throws an error. Its code is the following:

    ```csharp
    private string ip;

    public string IP
    {
        get { return ip; }
        set
        {
            string[] PartsOfIP = value.Split('.');
            if (PartsOfIP.Length == 4)
            {
                foreach (string part in PartsOfIP)
                {
                    int a = 0;
                    bool result = int.TryParse(part, out a);
                    if (result != true)
                    {
                        throw new Exception("Invalid IP");
                    }
                    else
                    {
                        ip = value;
                    }
                }
            }
            else
            {
                throw new Exception("Invalid IP");
            }
        }
    }
    ```

    In a User class I want to compose an object of the IPAddress class. I am doing validation for the properties of User in the User class and validation of the IP in the IPAddress class. My question is: how do I compose an IPAddress object in the User class, and what is the syntax for this? If I again write get and set for the IPAddress object in the User class, will my earlier getter and setter (in the IPAddress class) still run? Please advise in detail. Thanks.
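
    A minimal sketch of the composition (an assumption about the intended design, not code from the post): let User hold an IPAddress instance and delegate, so assignments still pass through the validating setter in IPAddress.

    ```csharp
    // Sketch: User wraps an IPAddress; setting User.IP forwards to
    // IPAddress.IP, whose setter performs the validation and throws.
    class User
    {
        private IPAddress address = new IPAddress();

        public string IP
        {
            get { return address.IP; }
            set { address.IP = value; }   // IPAddress's validating setter runs here
        }
    }
    ```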

    Read the article

  • Populate JOIN into a list in one database query

    - by axio
    I am trying to get the records from the 'many' table of a one-to-many relationship and add them as a list to the relevant record from the 'one' table. I am also trying to do this in a single database request. Code derived from http://stackoverflow.com/questions/1580199/linq-to-sql-populate-join-result-into-a-list almost achieves the intended result, but makes one database request per entry in the 'one' table, which is unacceptable. That failing code is here:

    ```csharp
    var res = from variable in _dc.GetTable<VARIABLE>()
              select new
              {
                  x = variable,
                  y = variable.VARIABLE_VALUEs
              };
    ```

    However, if I do a similar query but loop through all the results, then only a single database request is made. This code achieves all goals:

    ```csharp
    var res = from variable in _dc.GetTable<VARIABLE>()
              select variable;

    List<GDO.Variable> output = new List<GDO.Variable>();
    foreach (var v2 in res)
    {
        List<GDO.VariableValue> values = new List<GDO.VariableValue>();
        foreach (var vv in v2.VARIABLE_VALUEs)
        {
            values.Add(VariableValue.EntityToGDO(vv));
        }
        output.Add(EntityToGDO(v2));
        output[output.Count - 1].VariableValues = values;
    }
    ```

    However, the latter code is ugly as hell, and it really feels like something that should be doable in a single LINQ query. So, how can this be done in a single LINQ query that makes only a single database query? In both cases the table is set to preload using the following code:

    ```csharp
    _dc = _db.CreateLinqDataContext();
    var loadOptions = new DataLoadOptions();
    loadOptions.LoadWith<VARIABLE>(v => v.VARIABLE_VALUEs);
    _dc.LoadOptions = loadOptions;
    ```

    I am using .NET 3.5, and the database back-end was generated using SqlMetal.
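
    A sketch of one way to tidy this (an assumption, not verified against the poster's schema): hydrate once via the LoadWith preload, then shape the objects in memory, so only one SQL statement is issued.

    ```csharp
    // Sketch: AsEnumerable() forces a single hydration of the preloaded graph;
    // the Select then runs as plain LINQ-to-Objects, not as extra SQL.
    List<GDO.Variable> output = _dc.GetTable<VARIABLE>()
        .AsEnumerable()
        .Select(v =>
        {
            var gdo = EntityToGDO(v);
            gdo.VariableValues = v.VARIABLE_VALUEs
                                  .Select(VariableValue.EntityToGDO)
                                  .ToList();
            return gdo;
        })
        .ToList();
    ```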

    Read the article

  • NVelocity (or Velocity) as a stand-alone formula evaluator

    - by dana
    I am using NVelocity in my application to generate HTML emails. My application has an event-driven model, where saving and/or updating of objects causes these emails to be sent out. Each event can trigger zero, one or multiple emails. I want to be able to configure which emails get sent out at run-time without having to modify code. I was thinking I could leverage the NVelocity #if() directive to do this. Here is my idea...

    Step 1) Prior to email sending, the administrator must configure a formula for NVelocity to evaluate. For example:

    ```
    $User.FirstName == "Jack"
    ```

    Step 2) When an object is saved or created, build an NVelocity template in memory based on the input formula. For example:

    ```csharp
    String formula = GetFormulaFromDB(); // $User.FirstName == "Jack"
    String templ = "#if( " + formula + ") 1 #else 0 #end";
    ```

    Step 3) Execute the NVelocity engine in memory against the template. Check the results to see if we have to send the email:

    ```csharp
    String result = VelocityMerge(templ); // utility function
    if (result.Trim() == "1")
    {
        SendEmail();
    }
    ```

    I know this is not exactly what NVelocity was intended to do, but I think it just might work :) One of the benefits of doing things this way is that the same syntax can be used for the formula as is used inside the template. Does anybody have any words of caution or suggestions? Is there a way to execute the #if() directive without jumping through hoops like I have above? Is there a recommended way to validate the formula syntax ahead of time? Thanks.
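
    For reference, the in-memory merge can be sketched roughly like this, assuming NVelocity exposes the same `Velocity.Evaluate` entry point as the Jakarta Velocity engine it was ported from (the helper below is hypothetical, not from the post):

    ```csharp
    using System.IO;
    using NVelocity;
    using NVelocity.App;

    // Sketch: evaluate "#if( <formula> ) 1 #else 0 #end" against a context.
    static bool ShouldSendEmail(string formula, object user)
    {
        Velocity.Init();
        var context = new VelocityContext();
        context.Put("User", user);                  // exposes $User to the formula

        string template = "#if( " + formula + " ) 1 #else 0 #end";
        using (var writer = new StringWriter())
        {
            Velocity.Evaluate(context, writer, "formulaCheck", template);
            return writer.ToString().Trim() == "1";
        }
    }
    ```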

    Read the article

  • C# XmlWriter + prevent "/" "<" ">" chars

    - by flurreh
    Hello, I have an XmlWriter and want to write strings which contain the characters "/", "<" and ">" (which are part of the XML syntax and break the XML code). Here is my C# code:

    ```csharp
    public Boolean Initialize(String path)
    {
        Boolean result = true;
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.CheckCharacters = true;
        settings.Encoding = Encoding.UTF8;
        settings.Indent = true;
        xmlWriter = XmlWriter.Create(path, settings);
        xmlWriter.WriteStartDocument();
        xmlWriter.WriteStartElement("TestData");
        isInitialized = true;
        return result;
    }

    public void WriteProducts(List<Product> productList)
    {
        if (isInitialized == true)
        {
            foreach (Product product in productList)
            {
                xmlWriter.WriteStartElement("Product");
                xmlWriter.WriteElementString("Id", product.ProdId);
                xmlWriter.WriteElementString("Name", product.ProdName);
                xmlWriter.WriteElementString("GroupId", product.ProdGroup);
                xmlWriter.WriteElementString("Price", product.ProdPrice.ToString(Consts.FORMATTED_PRICE));
                xmlWriter.WriteEndElement();
            }
        }
    }

    public void Close()
    {
        xmlWriter.WriteEndElement();
        xmlWriter.WriteEndDocument();
    }
    ```

    The application runs without any errors, but if I look in the XML file, the XML is incomplete: the XmlWriter stops writing the product nodes when a product name contains one of the above mentioned characters. Is there a way to fix this problem?
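
    Two hedged observations (assumptions, not from the post): `WriteElementString` escapes `<`, `>` and `&` automatically, so the characters themselves should not corrupt the file; but `XmlWriter` buffers its output, so if the writer is never flushed or closed (or an exception interrupts the loop), the file ends up truncated. A sketch of a safer shutdown:

    ```csharp
    // Sketch: closing the writer flushes the buffered XML to disk.
    public void Close()
    {
        xmlWriter.WriteEndElement();    // </TestData>
        xmlWriter.WriteEndDocument();
        xmlWriter.Close();              // flush + release the file
    }
    ```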

    Read the article

  • Array of Arrays - writing to File problem

    - by iFloh
    Hi, and again my array of arrays ... I try to improve my app's performance by buffering arrays on file for later reuse. I have an NSMutableArray that contains about 30 NSMutableArrays with NSNumber, NSDate and NSString objects. I try to write the file using this call:

    ```objc
    bool result = [myArray writeToFile:[fileMethods getFullPath:
        [NSString stringWithFormat:@"iEts%@.arr", [aDate shortDateString]]]
                            atomically:NO];
    ```

    = result = FALSE. The path method is:

    ```objc
    + (NSString *)getFullPath:(NSString *)forFileName {
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        return [documentsDirectory stringByAppendingPathComponent:forFileName];
    }
    ```

    and the aDate call returns a shortDateString with ddMMyy. The NSLog

    ```objc
    NSLog(@"%@", [fileMethods getFullPath:[NSString stringWithFormat:@"iEts%@.arr", [aDate shortDateString]]]);
    ```

    on the path generation returns:

    ```
    /Users/me/Library/Application Support/iPhone Simulator/User/Applications/86729620-EC1D-4C10-A799-0C638BB27933/Documents/iEts010510.arr
    ```

    FURTHER: It must have something to do with the array of arrays, since I also write 3 further simple arrays (containing NSStrings) that all succeed. The array of arrays gets generated using the addObject method. Any ideas what could cause the trouble?
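
    A hedged pointer: `-writeToFile:atomically:` returns NO if anything in the object graph is not a property-list type (NSString, NSNumber, NSDate, NSData, NSArray, NSDictionary), so a single stray object, e.g. an NSNull or a custom class inside one of the inner arrays, makes the whole write fail. NSKeyedArchiver is a common fallback; a sketch:

    ```objc
    // Sketch: archive an arbitrary NSCoding-conformant graph instead of
    // relying on property-list serialization.
    NSString *path = [fileMethods getFullPath:@"iEts010510.arr"];
    BOOL ok = [NSKeyedArchiver archiveRootObject:myArray toFile:path];

    // ...and read it back later:
    NSArray *restored = [NSKeyedUnarchiver unarchiveObjectWithFile:path];
    ```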

    Read the article

  • How do I efficiently locate key-value pairs in a multi-dimensional PHP array?

    - by Kyle Noland
    I have an array in PHP as a result of the following query to a WordPress database:

    ```sql
    SELECT * FROM wp_postmeta WHERE post_id = :id
    ```

    I am returned a multidimensional array that looks like this:

    ```
    Array
    (
        [0] => Array
            (
                [meta_id] => 380
                [post_id] => 72
                [meta_key] => _edit_last
                [meta_value] => 1
            )
        ... etc.
    )
    ```

    What is the best way to find a particular key-value pair in this array? For instance, how would I locate the row where [meta_key] = event_name so that I can extract that same row's [meta_value] value into a PHP variable? I realize I could turn this into many individual MySQL queries. Does anyone have an opinion on the efficiency of doing 10 SQL queries in a row rather than searching the array 10 times? I would think since the array is in memory, that will be the fastest method to find the values I need. Alternatively, is there a better way to query the database from the beginning so that my result set is formatted in a way that is easier to search?
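
    One common approach, sketched under the assumption that each meta_key appears at most once per post: re-index the rows by meta_key once, so every lookup is a constant-time array access instead of a query or a scan.

    ```php
    // Sketch: build a meta_key => meta_value map from the query result.
    $meta = array();
    foreach ($rows as $row) {                 // $rows: the fetched result array
        $meta[$row['meta_key']] = $row['meta_value'];
    }

    $eventName = isset($meta['event_name']) ? $meta['event_name'] : null;
    ```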

    Read the article

  • Android OpenGL es "glDrawTexfOES" draws upside down

    - by Alle
    I'm using OpenGL ES on Android to draw my 2D images. Whenever I draw something using the code:

    ```java
    gl.glViewport(aspectRatioOffset, 0, screenWidth, screenHeight);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluOrtho2D(gl, aspectRatioOffset, screenWidth + aspectRatioOffset, screenHeight, 0);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, myScene.neededGraphics.get(ID).get(animationID).get(animationIndex));

    crop[0] = 0;
    crop[1] = 0;
    crop[2] = width;
    crop[3] = height;

    ((GL11Ext) gl).glDrawTexfOES(x, y, z, width, height);
    ```

    I get an upside down result. I've seen people solve this by doing:

    ```java
    crop[0] = 0;
    crop[1] = height;
    crop[2] = width;
    crop[3] = -height;
    ```

    This does however hurt the logic in my application, so I would like the result to not be flipped upside down. Does anyone know why it happens, and any way of avoiding or solving it?
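
    A hedged explanation: `glDrawTexfOES` bypasses the vertex pipeline, so the top-left-origin ortho matrix set up above never applies to it; the texture is drawn in window coordinates with a bottom-left origin, which is why it appears flipped relative to the rest of the scene. If the flip must not leak into application logic, it can be confined to one helper, sketched here with hypothetical names:

    ```java
    // Sketch: isolate the Y-flip inside a single draw helper so the rest of
    // the app keeps its top-left coordinate logic.
    private void drawSprite(GL10 gl, int textureId,
                            float x, float y, float z, int width, int height) {
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
        int[] crop = { 0, height, width, -height };      // flip happens here only
        ((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D,
                GL11Ext.GL_TEXTURE_CROP_RECT_OES, crop, 0);
        ((GL11Ext) gl).glDrawTexfOES(x, y, z, width, height);
    }
    ```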

    Read the article

  • MySQL select and group by values

    - by Foo
    I'd like to count and group rows by specific values. This seems fairly simple, but I can't seem to do it. I have a table set up similar to this:

    Table: Ratings

    ```
    id    pID    uID    rating
    1     1      2      7
    2     1      7      7
    3     1      5      4
    4     1      1      1
    ```

    id is the primary key, pID and uID are foreign keys. rating contains values between 1 and 10, and only between 1 and 10. I want to run some statistics and count the number of ratings with a certain value. In the example above, two users have left a rating of 7. So I wrote the following query:

    ```sql
    SELECT COUNT(*) AS `count`, `rating`
    FROM `ratings`
    WHERE pID = '1'
    GROUP BY `rating`
    ORDER BY `rating`
    ```

    which yields the nice result:

    ```
    count    rating
    1        1
    1        4
    2        7
    ```

    I'd like the MySQL query to include the values between 1 and 10 that have no rows as well. For example, the desired result:

    ```
    count    rating
    1        1
    0        2
    0        3
    1        4
    0        5
    0        6
    2        7
    0        8
    0        9
    0        10
    ```

    Unfortunately, I'm relatively new to SQL and I've been reading through everything I could get my hands on for the past hour, but I can't get it to work. I've been leaning along the lines of some type of JOIN. If anyone can point me in the right direction, it'd be appreciated. Thanks.
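
    One way to get the zero rows, sketched as an assumption rather than tested against the poster's schema: LEFT JOIN the ratings onto a derived table that enumerates all ten possible values, so missing ratings survive the join with a count of 0.

    ```sql
    -- Sketch: enumerate ratings 1..10, then LEFT JOIN the real rows onto them.
    SELECT COUNT(r.id) AS `count`, v.rating
    FROM (
        SELECT 1 AS rating UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
        UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8
        UNION SELECT 9 UNION SELECT 10
    ) v
    LEFT JOIN ratings r
           ON r.rating = v.rating
          AND r.pID = 1          -- filter in the ON clause to keep unmatched rows
    GROUP BY v.rating
    ORDER BY v.rating;
    ```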

    Read the article

  • Templates, Function Pointers and C++0x

    - by user328543
    One of my personal experiments to understand some of the C++0x features: I'm trying to pass a function pointer to a template function to execute. Eventually the execution is supposed to happen in a different thread. But with all the different types of functions, I can't get the templates to work.

    ```cpp
    #include <functional>

    int foo(void) { return 2; }

    class bar {
    public:
        int operator()(void) { return 4; }
        int something(int a) { return a; }
    };

    template <class C>
    int func(C&& c) {
        //typedef typename std::result_of< C() >::type result_type;
        typedef typename std::conditional<
            std::is_pointer< C >::value,
            std::result_of< C() >::type,
            std::conditional<
                std::is_object< C >::value,
                std::result_of< typename C::operator() >::type,
                void> >::type result_type;
        result_type result = c();
        return result;
    }

    int main(int argc, char* argv[]) {
        // call with a function pointer
        func(foo);

        // call with a member function
        bar b;
        func(b);

        // call with a bind expression
        func(std::bind(&bar::something, b, 42));

        // call with a lambda expression
        func( [](void)->int { return 12; } );

        return 0;
    }
    ```

    The result_of template alone doesn't seem to be able to find the operator() in class bar, and the clunky conditional I created doesn't compile. Any ideas? Will I have additional problems with const functions?
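
    For what it's worth, a hedged sketch: std::result_of is specified to handle all four callable shapes uniformly, so the std::conditional dispatch shouldn't be needed at all, and the trailing-return-type form sidesteps naming the result type up front:

    ```cpp
    // Sketch: result_of<C()> works for function references, functors,
    // lambdas and bind expressions alike; no per-kind dispatch required.
    #include <functional>
    #include <type_traits>
    #include <utility>

    template <class C>
    auto func(C&& c) -> typename std::result_of<C()>::type {
        return std::forward<C>(c)();   // invoke whatever callable we were given
    }
    ```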

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply but was surprised by the time needed for the task to be performed. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance (here, for example) and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

    ```r
    library("biomaRt")
    athaliana <- useMart("plants_mart_14")
    athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)
    ```

    2) Querying the database:

    ```r
    loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335",
              "at1g02790", "at1g03220", "at1g03230", "at1g04040",
              "at1g04110", "at1g05240")
    ```

    I create a function for the use in lapply:

    ```r
    foo <- function(loci) {
        getBM("description", "tair_locus", loci, athaliana)
    }
    ```

    When I use this function on the first element:

    ```r
    > system.time(foo(loci[1]))
    utilisateur     système      écoulé
          0.020       0.004       1.599
    ```

    When I use lapply to retrieve the data for all values:

    ```r
    > system.time(lapply(loci, foo))
    utilisateur     système      écoulé
          0.220       0.000      16.376
    ```

    I then created a new function, using a for-loop instead:

    ```r
    foo2 <- function(loci) {
        for (i in loci) {
            getBM("description", "tair_locus", loci[i], athaliana)
        }
    }
    ```

    Here is the result:

    ```r
    > system.time(foo2(loci))
    utilisateur     système      écoulé
          0.204       0.004      10.919
    ```

    Of course, this will be applied to a big list of loci, so the best performing option is needed. I thank you for assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the whole vector loci to getBM greatly improves the query efficiency. Simpler is better.

    ```r
    > system.time(lapply(loci, foo))
    utilisateur     système      écoulé
          0.236       0.024     110.512
    > system.time(foo2(loci))
    utilisateur     système      écoulé
          0.208       0.040     116.099
    > system.time(foo(loci))
    utilisateur     système      écoulé
          0.028       0.000       6.193
    ```

    Read the article

  • Using Enum in Hibernate causes select followed by an update statement

    - by Leonardo
    Hi all, I have a mapped entity which has an enum property. Looking at the log file, whenever I run a select statement on such an entity, the result is an immediately following update. For example, if my result set contains 100 records, then I have:

    ```
    [INFO org... select...]
    [INFO org... update... where id=?]
    [INFO org... update... where id=?]
    .... repeated 100 times
    ```

    If I mark the property as update=false the problem disappears. The enum is assigned through an enum converter class, which I copied from a well known book, so I don't know if I just copied and pasted the code correctly. Here is how it is declared in the hbm file:

    ```xml
    <typedef class="mypackage.HbnEnumConverter" name="the_type">
        <param name="enumClassname">mypackage.TheType</param>
    </typedef>
    ```

    Can you point out a direction to investigate this? Besides, what are the consequences of having update=false on a Hibernate field? Thanks.
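
    A hedged direction to look (not from the post): Hibernate decides whether an entity is dirty by asking the custom type to compare the loaded snapshot with the current value; if the converter's `equals()` (or `deepCopy()`) misreports a change, every loaded row gets scheduled for an UPDATE at flush time. For an immutable enum, the dirty-checking-relevant UserType methods can be as simple as:

    ```java
    // Sketch of the dirty-checking-relevant parts of a UserType for enums.
    public boolean equals(Object x, Object y) {
        return x == y;          // enum constants are singletons
    }

    public Object deepCopy(Object value) {
        return value;           // enums are immutable; no copy needed
    }

    public boolean isMutable() {
        return false;
    }
    ```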

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

    ```cpp
    errorType LargeFile::read(void* data_out, __int64 start_position,
                              __int64 size_bytes) const
    {
        if (!m_open) {
            // return error
        } else {
            seekPosition(start_position);
            DWORD bytes_read;
            BOOL result = ReadFile(m_file, data_out, DWORD(size_bytes),
                                   &bytes_read, NULL);
            if (size_bytes != bytes_read || result != TRUE) {
                // return error
            }
        }
        // return no error
    }

    void LargeFile::seekPosition(__int64 position) const
    {
        LARGE_INTEGER target;
        target.QuadPart = LONGLONG(position);
        SetFilePointerEx(m_file, target, NULL, FILE_BEGIN);
    }
    ```

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple of questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
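
    One low-effort experiment, sketched as a suggestion rather than a verified fix: the flags passed to CreateFile steer the cache manager's read-ahead, and for mostly incoherent 4K reads FILE_FLAG_RANDOM_ACCESS often helps, while FILE_FLAG_SEQUENTIAL_SCAN suits linear scans.

    ```cpp
    // Sketch: open the file with an access-pattern hint (names hypothetical).
    #include <windows.h>

    HANDLE OpenLargeFile(const char* path, bool randomAccess)
    {
        return CreateFileA(
            path,
            GENERIC_READ,
            FILE_SHARE_READ,
            NULL,
            OPEN_EXISTING,
            randomAccess ? FILE_FLAG_RANDOM_ACCESS : FILE_FLAG_SEQUENTIAL_SCAN,
            NULL);
    }
    ```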

    Read the article

  • How to combine two separate unrelated Git repositories into one with single history timeline

    - by Antony
    I have two unrelated (not sharing any ancestor check-in) Git repositories. One is a super repository which contains a number of smaller projects (let's call it repository A). The other is just a makeshift local Git repository for a smaller project (let's call it repository B). Graphically, it would look like this:

    ```
    A0-B0-C0-D0-E0-F0-G0-HEAD (repo A)
    A0-B0-C0-D0-E0-F0-G0-HEAD (remote/master bare repo pulled & pushed from repo A)
    A1-B1-C1-D1-E1-HEAD       (repo B)
    ```

    Ideally, I would really like to merge repo B into repo A with a single history timeline, so it would appear that I originally started the project in repo A. Graphically, this would be the ideal end result:

    ```
    A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (new repo A)
    A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (remote/master bare repo pulled & pushed from repo A)
    ```

    I have been doing some reading on submodules and subtree (Pro Git is a pretty good book, by the way), but both of them seem to cater to maintaining two separate branches, with submodule being able to pull changes from upstream and subtree being slightly less of a headache. Both solutions require additional and specialized git commands to handle check-ins and sync between the master and the subtree/module branch. Both solutions also result in multiple timelines (with --squash you even get three timelines with subtree). The closest solution from SO seems to talk about "graft", but is that really it? The goal is to have a single unified repository where I can pull/push check-ins, so that there is no more repo B, just repo A in the end.
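
    A sketch of the rebase route, assuming the histories may simply be appended rather than interleaved by date (interleaving as in the ideal diagram above would take a history rewrite, e.g. git filter-branch with re-dated commits):

    ```sh
    # Sketch: replay repo B's commits on top of repo A's tip (paths hypothetical).
    cd repoA
    git remote add repoB /path/to/repoB
    git fetch repoB
    git checkout -b import repoB/master
    git rebase master      # unrelated histories: every B commit is replayed onto A
    git checkout master
    git merge import       # fast-forward; one linear timeline remains
    git remote rm repoB
    ```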

    Read the article

  • RIA Service/oData ... "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an OData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser; however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions.

    Sample OData feed (the XML markup was stripped in transit; only the text content survives, showing the ProductSet feed and one entry):

    ```
    ProductSet
    http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/
    2010-04-28T14:02:10Z
    http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')
    2010-04-28T14:02:10Z
    b0a2b170-c6df-441f-ae2a-74dd19901128
    Product 0
    Type 1
    Active
    ```

    Sample web.config entry: (stripped in transit)

    Sample service:

    ```csharp
    [EnableClientAccess()]
    public class ProductService : DomainService
    {
        [Query(IsDefault = true)]
        public IQueryable<Product> GetProducts()
        {
            IList<Product> products = new List<Product>();
            for (int i = 0; i < 90; i++)
            {
                Product product = new Product
                {
                    Id = Guid.NewGuid(),
                    Name = "Product " + i.ToString(),
                    ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                    Status = i % 2 == 0 ? "Active" : "NotActive"
                };
                products.Add(product);
            }
            return products.AsQueryable();
        }
    }
    ```

    If I provide the URL "http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')" to my web browser I receive the following XML:

    ```
    Requests that attempt to access a single element using key values from a result set are not supported.
    ```

    I'm new to RIA and OData. Could this be something as simple as my web browser not supporting this type of querying on the result set, or something else? Thanks ahead! Corey

    Read the article

  • Custom bean instantiation logic in Spring MVC

    - by Michal Bachman
    I have a Spring MVC application trying to use a rich domain model, with the following mapping in the controller class:

    ```java
    @RequestMapping(value = "/entity", method = RequestMethod.POST)
    public String create(@Valid Entity entity, BindingResult result, ModelMap modelMap) {
        if (entity == null)
            throw new IllegalArgumentException("An entity is required");
        if (result.hasErrors()) {
            modelMap.addAttribute("entity", entity);
            return "entity/create";
        }
        entity.persist();
        return "redirect:/entity/" + entity.getId();
    }
    ```

    Before this method gets executed, Spring uses BeanUtils to instantiate a new Entity and populate its fields. It uses this:

    ```java
    ...
    ReflectionUtils.makeAccessible(ctor);
    return ctor.newInstance(args);
    ```

    Here's the problem: my entities are Spring-managed beans. The reason for this is to inject DAOs into them. Instead of calling new, I use EntityFactory.createEntity(). When they're retrieved from the database, I have an interceptor that overrides the `public Object instantiate(String entityName, EntityMode entityMode, Serializable id)` method and hooks the factories into that. So the last piece of the puzzle missing here is how to force Spring to use the factory rather than its own BeanUtils reflective approach. Any suggestions for a clean solution? Thanks very much in advance.
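
    One hedged possibility: Spring MVC only instantiates the model attribute reflectively when it cannot find one already in the model, so a @ModelAttribute method on the controller can hand it a factory-created instance to bind onto. A sketch:

    ```java
    // Sketch: supply the instance yourself; Spring then binds request
    // parameters onto it instead of calling the constructor via BeanUtils.
    @ModelAttribute("entity")
    public Entity createEntity() {
        return entityFactory.createEntity();   // hypothetical factory reference
    }
    ```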

    Read the article

  • LINQ: display results from empty lists

    - by Douglas H. M.
    I've created two entities (simplified) in C#:

    ```csharp
    class Log
    {
        public Log()
        {
            Entries = new List<Entry>();
        }

        public DateTime Date { get; set; }
        public IList<Entry> Entries { get; set; }
    }

    class Entry
    {
        public DateTime ClockIn { get; set; }
        public DateTime ClockOut { get; set; }
    }
    ```

    I am using the following code to initialize the objects:

    ```csharp
    Log log1 = new Log()
    {
        Date = new DateTime(2010, 1, 1),
    };
    log1.Entries.Add(new Entry()
    {
        ClockIn = new DateTime(0001, 1, 1, 9, 0, 0),
        ClockOut = new DateTime(0001, 1, 1, 12, 0, 0)
    });

    Log log2 = new Log()
    {
        Date = new DateTime(2010, 2, 1),
    };
    ```

    The method below is used to get the date logs:

    ```csharp
    var query = from l in DB.GetLogs()
                from e in l.Entries
                orderby l.Date ascending
                select new
                {
                    Date = l.Date,
                    ClockIn = e.ClockIn,
                    ClockOut = e.ClockOut,
                };
    ```

    The result of the above LINQ query is:

    ```
    Date       | Clock In | Clock Out
    01/01/2010 | 09:00    | 12:00
    ```

    My question is: what is the best way to rewrite the LINQ query above to include the results from the second object I created (log2), since it has an empty list? In other words, I would like to display all dates even if they don't have time values. The expected result would be:

    ```
    Date       | Clock In | Clock Out
    01/01/2010 | 09:00    | 12:00
    02/01/2010 |          |
    ```
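
    A sketch of one way to do it (an assumption, not tested against the poster's model): DefaultIfEmpty() turns an empty inner sequence into { null }, which is the standard LINQ idiom for a left outer join.

    ```csharp
    // Sketch: logs with no entries yield one row with null clock values.
    var query = from l in DB.GetLogs()
                from e in l.Entries.DefaultIfEmpty()
                orderby l.Date ascending
                select new
                {
                    Date = l.Date,
                    ClockIn = e != null ? (DateTime?)e.ClockIn : null,
                    ClockOut = e != null ? (DateTime?)e.ClockOut : null,
                };
    ```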

    Read the article

  • Querying XML using node numbers

    - by CP
    Okay, so I'm writing a utility that compares two XML documents using Microsoft's XML diff and patch tool. The result looks something like this:

    ```xml
    <?xml version="1.0" encoding="utf-16"?>
    <xd:xmldiff version="1.0" srcDocHash="10728157883908851288"
                options="IgnoreChildOrder IgnoreComments IgnoreWhitespace "
                fragments="yes"
                xmlns:xd="http://schemas.microsoft.com/xmltools/2002/xmldiff">
      <xd:node match="1">
        <xd:node match="1">
          <xd:node match="1">
            <xd:node match="2">
              <xd:node match="1">
                <xd:node match="1">
                  <xd:node match="2">
                    <xd:change match="1">testi22n2123</xd:change>
                  </xd:node>
                </xd:node>
                <xd:add match="/1/1/1/2/1/8" opid="1" />
                <xd:node match="7">
                  <xd:node match="1">
                    <xd:change match="1">31</xd:change>
                  </xd:node>
                  <xd:node match="2">
                    <xd:change match="1">test2ing</xd:change>
                  </xd:node>
                </xd:node>
                <xd:remove match="8" opid="1" />
              </xd:node>
            </xd:node>
          </xd:node>
        </xd:node>
      </xd:node>
      <xd:descriptor opid="1" type="move" />
    </xd:xmldiff>
    ```

    What I'm trying to do is go back into the source document and get the source data that represents the difference. I initially tried creating an XPath query, but as I understand it now this XmlDiff thing works off the DOM... which seems like the dinosaur of XML objects these days. What's the best way to get at the node in the source XML by using the numbers provided in the diff result?
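
    A hedged sketch: in the XDL diff language, each nested match="n" appears to be a 1-based position among the parent's child nodes, so a chain of xd:node elements can be translated into a position-based XPath and evaluated against the source document:

    ```csharp
    // Sketch (assumption about the match semantics): the chain 1/1/1/2/1
    // becomes a position-based XPath over the source document's children.
    using System.Xml;

    XmlDocument source = new XmlDocument();
    source.Load("source.xml");                       // hypothetical file name

    string xpath = "/node()[1]/node()[1]/node()[1]/node()[2]/node()[1]";
    XmlNode target = source.SelectSingleNode(xpath);
    ```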

    Read the article

  • Facebook connect with iPhone not working?

    - by Atulkumar V. Jain
    Hi everybody, I am trying to use Facebook Connect in my application, but it's not working as I desire. When I try to use the API key and the API secret key of the application which I have registered with Facebook, it's not working. I have downloaded the code for Facebook. In the SessionViewController.m file, when I pass my key values it's not working. What I am trying to achieve is: when the app launches, the first page is the Facebook login page. The user enters his username and password, and then the next view should display. But nothing is happening; even the label doesn't display the username. Here's the code which I am using:

    ```objc
    - (void)request:(FBRequest*)request didLoad:(id)result {
        NSArray* users = result;
        NSDictionary* user = [users objectAtIndex:0];
        NSString* name = [user objectForKey:@"name"];
        _label.text = [NSString stringWithFormat:@"Logged in as %@", name];
        NSLog(@"Username is :- %@", name);

        FrontController *main = [[FrontController alloc] init];
        [self.view addSubview:main.view];
        [main release];
    }
    ```

    I am not able to figure out what is wrong with this code. When I try with some other key values, such as the key for a Connect application, it's working fine. Can anyone help me with this... Thanks in advance...

    Read the article

  • Filtering null values with Pig

    - by arianp
    It looks like a silly problem, but I can't find a way to filter null values from my rows. This is the result when I dump the object geoinfo:

    ```
    DUMP geoinfo;
    ([longitude#70.95853,latitude#30.9773])
    ([longitude#-9.37944507,latitude#38.91780853])
    (null)
    (null)
    (null)
    ([longitude#-92.64416,latitude#16.73326])
    (null)
    (null)
    ([longitude#-9.15199849,latitude#38.71179122])
    ([longitude#-9.15210796,latitude#38.71195131])
    ```

    Here is the description:

    ```
    DESCRIBE geoinfo;
    geoinfo: {geoLocation: bytearray}
    ```

    What I'm trying to do is to filter null values like this:

    ```
    geoinfo_no_nulls = FILTER geoinfo BY geoLocation is not null;
    ```

    but the result remains the same; nothing is filtered. I also tried something like this:

    ```
    geoinfo_no_nulls = FILTER geoinfo BY geoLocation != 'null';
    ```

    and I got an error:

    ```
    org.apache.pig.backend.executionengine.ExecException: ERROR 1071: Cannot convert a map to a String
    ```

    What am I doing wrong? Details: running on Ubuntu, hadoop-1.0.3 with Pig 0.9.3.

    ```
    pig -version
    Apache Pig version 0.9.3-SNAPSHOT (rexported) compiled Oct 24 2012, 19:04:03

    java version "1.6.0_24"
    OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
    OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
    ```

    Read the article

  • How do I write this GROUP BY in a MySQL UNION query?

    - by user1652368
    Trying to group the results of two queries together. When I run this query:

    ```sql
    SELECT pr_id, pr_sbtcode, pr_sdesc, od_quantity, od_amount
    FROM (
        SELECT `bgProducts`.`pr_id`, `bgProducts`.`pr_sbtcode`, `bgProducts`.`pr_sdesc`,
               SUM(`od_quantity`) AS `od_quantity`,
               SUM(`od_amount`) AS `od_amount`,
               MIN(UNIX_TIMESTAMP(`or_date`)) AS `or_date`
        FROM `bgOrderMain`
        JOIN `bgOrderData`
        JOIN `bgProducts`
        WHERE `bgOrderMain`.`or_id` = `bgOrderData`.`or_id`
          AND `od_pr` = `pr_id`
          AND UNIX_TIMESTAMP(`or_date`) >= '1262322000'
          AND UNIX_TIMESTAMP(`or_date`) <= '1346990399'
          AND (`pr_id` = '415' OR `pr_id` = '1088')
        GROUP BY `bgProducts`.`pr_id`
        UNION
        SELECT `bgProducts`.`pr_id`, `bgProducts`.`pr_sbtcode`, `bgProducts`.`pr_sdesc`,
               SUM(`od_quantity`) AS `od_quantity`,
               SUM(`od_amount`) AS `od_amount`,
               MIN(UNIX_TIMESTAMP(`or_date`)) AS `or_date`
        FROM `npOrderMain`
        JOIN `npOrderData`
        JOIN `bgProducts`
        WHERE `npOrderMain`.`or_id` = `npOrderData`.`or_id`
          AND `od_pr` = `pr_id`
          AND UNIX_TIMESTAMP(`or_date`) >= '1262322000'
          AND UNIX_TIMESTAMP(`or_date`) <= '1346990399'
          AND (`pr_id` = '415' OR `pr_id` = '1088')
        GROUP BY `bgProducts`.`pr_id`
    ) TEMPTABLE3;
    ```

    it produces this result:

    ```
    +-------+------------+------------+-------------+-----------+
    | pr_id | pr_sbtcode | pr_sdesc   | od_quantity | od_amount |
    +-------+------------+------------+-------------+-----------+
    |   415 | NP13       | Product 13 |           5 |       125 |
    |  1088 | NPAW       | Product AW |           4 |       100 |
    |   415 | NP13       | Product 13 |           5 |       125 |
    |  1088 | NPAW       | Product AW |           2 |        50 |
    +-------+------------+------------+-------------+-----------+
    ```

    What I want is a result that combines those into 2 lines:

    ```
    +-------+------------+------------+-------------+-----------+
    | pr_id | pr_sbtcode | pr_sdesc   | od_quantity | od_amount |
    +-------+------------+------------+-------------+-----------+
    |   415 | NP13       | Product 13 |          10 |       250 |
    |  1088 | NPAW       | Product AW |           6 |       150 |
    +-------+------------+------------+-------------+-----------+
    ```

    So I added GROUP BY pr_id to the end of the query:

    ```sql
    SELECT pr_id, pr_sbtcode, pr_sdesc, od_quantity, od_amount
    FROM (
        -- same two SELECTs combined with UNION as above
        ...
    ) TEMPTABLE3
    GROUP BY pr_id;
    ```

    But that just gives me this:

    ```
    +-------+------------+------------+-------------+-----------+
    | pr_id | pr_sbtcode | pr_sdesc   | od_quantity | od_amount |
    +-------+------------+------------+-------------+-----------+
    |   415 | NP13       | Product 13 |           5 |       125 |
    |  1088 | NPAW       | Product AW |           4 |       100 |
    +-------+------------+------------+-------------+-----------+
    ```

    What am I missing here??
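
    A hedged note on a likely fix (not from the original post): after the UNION, the outer query has to aggregate again; a bare GROUP BY pr_id lets MySQL pick one arbitrary row's od_quantity and od_amount per group instead of adding them. UNION ALL is also safer here, since plain UNION would silently merge two per-table rows that happened to be identical. A sketch:

    ```sql
    -- Sketch: re-aggregate in the outer query.
    SELECT pr_id, pr_sbtcode, pr_sdesc,
           SUM(od_quantity) AS od_quantity,
           SUM(od_amount)   AS od_amount
    FROM (
        -- the two per-table SELECTs from above, combined with UNION ALL
        ...
    ) TEMPTABLE3
    GROUP BY pr_id;
    ```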

    Read the article

  • How to lazily process an XML document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000 MB) XML files I came across hexpat. There is an example in the Haskell Wiki that claims to

    ```haskell
    -- Process document before handling error, so we get lazy processing.
    ```

    For testing purposes I have redirected the output to /dev/null and thrown a 300 MB file at it. Memory consumption kept rising until I had to kill the process. Now I removed the error handling from the process function:

    ```haskell
    process :: String -> IO ()
    process filename = do
      inputText <- L.readFile filename
      let (xml, mErr) = parse defaultParseOptions inputText :: (UNode String, Maybe XMLParseError)

      hFile <- openFile "/dev/null" WriteMode
      L.hPutStr hFile $ format xml
      hClose hFile

      return ()
    ```

    As a result the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If yes, is there a way to handle the error while using constant memory?

    http://www.haskell.org/haskellwiki/Hexpat/

    Read the article

  • Why should I abstract my data layer?

    - by Gazillion
    OOP principles were difficult for me to grasp because for some reason I could never apply them to web development. As I developed more and more projects I started understanding how some parts of my code could use certain design patterns to make them easier to read, reuse, and maintain, so I started to use them more and more. The one thing I still can't quite comprehend is why I should abstract my data layer. Basically, if I need to print a list of items stored in my DB to the browser, I do something along the lines of:

    ```php
    $sql = 'SELECT * FROM table WHERE type = "type1"';
    $result = mysql_query($sql);
    while ($row = mysql_fetch_assoc($result)) {
        echo '<li>' . $row['name'] . '</li>';
    }
    ```

    I'm reading all these how-tos and articles preaching the greatness of PDO, but I don't understand why. I don't seem to be saving any LoC, and I don't see how it would be more reusable, because all the functions that I call above just seem to be encapsulated in a class but do the exact same thing. The only advantage I'm seeing in PDO is prepared statements. I'm not saying data abstraction is a bad thing; I'm asking these questions because I'm trying to design my current classes correctly, and they need to connect to a DB, so I figured I'd do this the right way. Maybe I'm just reading bad articles on the subject :) I would really appreciate any advice, links, or concrete real-life examples on the subject!
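
    For comparison, the same listing through PDO with a prepared statement, sketched with hypothetical connection details: the driver handles quoting, and the calling code stays unchanged if the backend moves to another PDO-supported database.

    ```php
    // Sketch: parameterized query via PDO instead of string-built SQL.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
    $stmt = $pdo->prepare('SELECT * FROM table WHERE type = :type');
    $stmt->execute(array(':type' => 'type1'));

    foreach ($stmt as $row) {
        echo '<li>' . $row['name'] . '</li>';
    }
    ```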

    Read the article
