Search Results

Search found 110151 results on 4407 pages for 'real time data integratio'.


  • SQLAuthority News – Interview with SQL Server MVP Madhivanan – A Real Problem Solver

    - by pinaldave
    Madhivanan (SQL Server MVP) is a real community hero. He is known for his two skills – 1) Helping the community and 2) Helping the community. I have met him many times, and every time I feel that if anybody in the online world needs help, Madhivanan does his best to reach out and solve the problem. His name is not new if you are reading this blog or have ever asked a question in any online SQL forum. He is always there to help. When Madhivanan has time, he even helps people on this blog as well. He spends his valuable time helping the community. He recently crossed 1000 helpful comments on this blog. On that occasion, I interviewed him to find out if he has any life outside SQL.

    Q 1. Tell us something about yourself.
    I am Madhivanan, an MSc Computer Science graduate from Chennai, India, working as a Lead Analyst-Project at Ellaar Infotek Solutions Private Limited. I am basically a developer who started with Visual Basic 6.0, SQL Server 2000 and Crystal Reports 8. As the years went on, I started working more on writing queries in SQL Server for most of the projects developed in my company. I have a good level of knowledge in Oracle, MySQL and PostgreSQL as well. Now I am leading a project developed on Windows Azure.

    Q 2. What motivates you to help people in the community and forums?
    When I ran into errors during application development in the early days of my career, I got good solutions from online forums and weblogs, so I decided to help others where possible. I visit forums and help people when I know the answer to their questions. I am one of the leading posters at www.sqlteam.com and also a moderator at www.sql-server-performance.com. I also take part in Visual Basic and Crystal Reports forums. I have been a SQL Server MVP since 2007.

    Q 3. Your personal life is not much known. Tell us something about it.
    I am a happily married person. My wife is a B.Pharm graduate. I have a son who is now 18 months old.

    Q 4. Where can we read more about your community activity?
    I have a blog at http://beyondrelational.com/blogs/madhivanan where you can find most of my T-SQL material.

    Q 5. When not working with SQL, what do you do?
    When not working with SQL, I spend time playing with my son, reading magazines and watching TV.

    Madhivanan, for your work and help to the community, a true salute to you. Hats off, my friend.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Elapsed time of running a C program

    - by yCalleecharan
    Hi, I would like to know what lines of C code to add to a program so that it tells me the total time the program takes to run. I guess there should be a counter initialization near the beginning of main and one after the main function ends. Is the right header clock.h? Thanks a lot...

    Update: I have a Win XP machine. Is it just adding clock() at the beginning and another clock() at the end of the program? Then I can estimate the time difference. Yes, you're right, it's time.h. Here's my code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>
        #include <share.h>
        #include <time.h>

        void f(long double fb[], long double fA, long double fB);

        int main() {
            clock_t start, end;
            start = clock();
            const int ARRAY_SIZE = 11;
            long double* z = (long double*) malloc(sizeof(long double) * ARRAY_SIZE);
            int i;
            long double A, B;
            if (z == NULL) {
                printf("Out of memory\n");
                exit(-1);
            }
            A = 0.5;
            B = 2;
            for (i = 0; i < ARRAY_SIZE; i++) {
                z[i] = 0;
            }
            z[1] = 5;
            f(z, A, B);
            for (i = 0; i < ARRAY_SIZE; i++)
                printf("z is %.16Le\n", z[i]);
            free(z);
            z = NULL;
            end = clock();
            printf("Took %ld ticks\n", end - start);
            printf("Took %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
            return 0;
        }

        void f(long double fb[], long double fA, long double fB) {
            fb[0] = fb[1] * fA;
            fb[1] = fb[1] - 1;
            return;
        }

    Some errors with MVS2008:

        testim.c(16) : error C2143: syntax error : missing ';' before 'const'
        testim.c(18) : error C2143: syntax error : missing ';' before 'type'
        testim.c(20) : error C2143: syntax error : missing ';' before 'type'
        testim.c(21) : error C2143: syntax error : missing ';' before 'type'
        testim.c(23) : error C2065: 'z' : undeclared identifier
        testim.c(23) : warning C4047: '==' : 'int' differs in levels of indirection from 'void *'
        testim.c(28) : error C2065: 'A' : undeclared identifier
        testim.c(28) : warning C4244: '=' : conversion from 'double' to 'int', possible loss of data

    and it goes up to 28 errors. Note that I don't have any errors/warnings without the clock code.

    LATEST NEWS: I unfortunately didn't get a good reply here, but after a search on Google, the code is working. Here it is:

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>
        #include <share.h>
        #include <time.h>

        void f(long double fb[], long double fA, long double fB);

        int main() {
            clock_t start = clock();
            const int ARRAY_SIZE = 11;
            long double* z = (long double*) malloc(sizeof(long double) * ARRAY_SIZE);
            int i;
            long double A, B;
            if (z == NULL) {
                printf("Out of memory\n");
                exit(-1);
            }
            A = 0.5;
            B = 2;
            for (i = 0; i < ARRAY_SIZE; i++) {
                z[i] = 0;
            }
            z[1] = 5;
            f(z, A, B);
            for (i = 0; i < ARRAY_SIZE; i++)
                printf("z is %.16Le\n", z[i]);
            free(z);
            z = NULL;
            printf("Took %f seconds\n", ((double)clock() - start) / CLOCKS_PER_SEC);
            return 0;
        }

        void f(long double fb[], long double fA, long double fB) {
            fb[0] = fb[1] * fA;
            fb[1] = fb[1] - 1;
            return;
        }

    Cheers
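
    The MVS2008 errors are consistent with C89/C90 rules: when compiling a .c file, MSVC requires all declarations in a block to precede the first statement, so "const int ARRAY_SIZE = 11;" appearing after "start = clock();" triggers the cascade of C2143 errors. The working version avoids this because "clock_t start = clock();" is itself a declaration. A minimal, self-contained sketch of the same timing pattern (standard C, nothing beyond time.h assumed):

        #include <stdio.h>
        #include <time.h>

        /* Toy workload so there is something measurable. */
        static double work(void)
        {
            double sum = 0.0;
            long i;
            for (i = 0; i < 10000000L; i++)
                sum += (double)i / 3.0;
            return sum;
        }

        int main(void)
        {
            clock_t start, end;   /* all declarations first: valid C89 */
            double r;

            start = clock();
            r = work();
            end = clock();

            printf("result: %f\n", r);
            /* clock() measures processor time, not wall-clock time. */
            printf("took %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
            return 0;
        }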

    Read the article

  • Chef: nested data bag data to template file returns "can't convert String into Integer"

    - by Dalho Park
    I'm creating a simple test recipe with a template and a data bag. What I'm trying to do is create a config file from a data bag that holds simple nested information, but I receive the error "can't convert String into Integer". Here are my files:

    1) recipes/default.rb

        data1 = data_bag_item( 'mytest', 'qa' )['test']
        data2 = data_bag_item( 'mytest', 'qa' )

        template "/opt/env/test.cfg" do
          source "test.erb"
          action :create_if_missing
          mode 0664
          owner "root"
          group "root"
          variables({
            :pepe1 => data1['part.name'],
            :pepe2 => data2['transport.tcp.ip2']
          })
        end

    2) my data bag named "mytest"

        $ knife data bag show mytest qa
        id:   qa
        test:
          part.name:          L12
          transport.tcp.ip:   111.111.111.111
          transport.tcp.port: 9199
          transport.tcp.ip2:  222.222.222.222

    3) template file test.erb

        part.name=<%= @pepe1 %>
        transport.tcp.binding=<%= @pepe2 %>

    The error returned when I run chef-client on my server:

        [2013-06-24T19:50:38+00:00] DEBUG: filtered backtrace of compile error:
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]',
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file',
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'
        [2013-06-24T19:50:38+00:00] DEBUG: backtrace entry for compile error:
          '/var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]''
        [2013-06-24T19:50:38+00:00] DEBUG: Line number of compile error: '19'

        Recipe Compile Error in /var/chef/cache/cookbooks/config_test/recipes/default.rb

        TypeError: can't convert String into Integer

        Cookbook Trace:
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'

        Relevant File Content:
        /var/chef/cache/cookbooks/config_test/recipes/default.rb:
          12: template "/opt/env/test.cfg" do
          13:   source "test.erb"
          14:   action :create_if_missing
          15:   mode 0664
          16:   owner "root"
          17:   group "root"
          18:   variables({
          19:     :pepe1 => data1['part.name'],
          20:     :pepe2 => data2['transport.tcp.ip2']
          21:   })
          22: end
          23:

    I tried many things, and if I comment out ":pepe1 => data1['part.name']," then :pepe2 => data2['transport.tcp.ip2'] works fine. Only the nested data "part.name" cannot be set to @pepe1. Does anyone know why I receive the errors? Thanks,
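
    One hedged debugging sketch, for what it's worth: in Ruby, "can't convert String into Integer" from [] almost always means a String index was handed to an Array (or String), so logging the actual shape of the bag item usually pinpoints which level of nesting is not the Hash you expect. Chef::Log is part of Chef; the bag and item names below come from the question:

        # Drop this above the template resource to inspect the structure.
        item = data_bag_item('mytest', 'qa')
        Chef::Log.info("item class:           #{item.class}")
        Chef::Log.info("item['test'] class:   #{item['test'].class}")  # expect Hash
        Chef::Log.info("item['test'] content: #{item['test'].inspect}")

        # If item['test'] turns out to be an Array (for example because the
        # JSON file wraps the keys in [ ... ]), then item['test']['part.name']
        # raises exactly "can't convert String into Integer".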

    Read the article

  • pure-ftpd debian, can't get www-data user working

    - by lynks
    I'm trying to add FTP access to the Apache web files; in the past I have done this with an ftpuser and group arrangement. This time I would like to make it possible to log in directly as www-data (the default Apache user on Debian) to make things a bit cleaner. I have checked and re-checked all the common issues:

    - MinUID is set to 1 (www-data has uid 33)
    - www-data has shell set to /bin/bash in /etc/passwd
    - PAMAuthentication is off
    - UnixAuthentication is on
    - I have restarted pure-ftpd using /etc/init.d/pure-ftpd restart

    My resulting pure-ftpd invocation is:

        /usr/sbin/pure-ftpd -l unix -A -Y 1 -u 1 -E -O clf:/var/log/pure-ftpd/transfer.log -8 UTF-8 -B

    My syslog contains:

        Oct 7 19:46:40 Debian-60-squeeze-64 pure-ftpd: ([email protected]) [WARNING] Can't login as [www-data]: account disabled

    And my FTP client is giving me:

        530 Sorry, but I can't trust you

    Am I missing something obvious?

    Read the article

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present. Analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB, though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run CHKDSK on my backup disk, which made the folders accessible but also left them empty. I connected this disk via SATA to my main PC (Arch Linux). I tried:

    - testdisk
    - ntfsundelete
    - ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK" though)

    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than by a typical journal'd deletion. If it is useful at all, a majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files. If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:

    - Restores directory structure
    - Recovers many filenames in addition to the file data
    - Is free / very cheap
    - Runs on Linux
    - Recovers a majority of file data

    The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)
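
    As a rough illustration of the heuristic sector-scanner idea (a sketch only, not a substitute for dedicated carving tools): JPEG files start with the three-byte SOI signature FF D8 FF, and on NTFS the start of a file is normally cluster-aligned, so scanning a raw image for aligned signatures finds candidate offsets. The image path below is hypothetical:

        # Minimal file-carving sketch (Python): find candidate JPEG starts
        # in a raw disk image. Signatures that straddle the 1 MiB read
        # boundary are missed in this simple version.
        SIGNATURE = b"\xff\xd8\xff"   # JPEG SOI marker
        ALIGN = 4096                  # common NTFS cluster size; an assumption

        def find_jpeg_offsets(path, limit=10):
            offsets = []
            with open(path, "rb") as disk:
                pos = 0
                while len(offsets) < limit:
                    chunk = disk.read(1024 * 1024)
                    if not chunk:
                        break
                    i = chunk.find(SIGNATURE)
                    while i != -1:
                        if (pos + i) % ALIGN == 0:   # keep aligned hits only
                            offsets.append(pos + i)
                        i = chunk.find(SIGNATURE, i + 1)
                    pos += len(chunk)
            return offsets

        if __name__ == "__main__":
            print(find_jpeg_offsets("disk.img"))   # hypothetical image file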

    Read the article

  • Any "Magic Tricks" For Getting Data Back After Windows 7 Install

    - by user163757
    My old man installed Windows 7 without making a proper backup, and now realizes he left behind some important data. He did a true "clean install", so there is no Windows.old folder in the root directory. However, I believe the format performed on the hard drive was only a quick format, so I am hoping there is some chance of data recovery. I took his hard drive out and have spent the majority of the weekend researching data recovery options. I paid $70 for the GetDataBack software, but have had little success with it. I can see all of the files I want to restore; however, they appear corrupt when I try to open them. With all that being said, does anyone know of a viable way to recover some of this data, or is it a lost cause altogether?

    Read the article

  • RAID 0 data recovery?

    - by Fred
    Hi all, I have two identical Seagate 7200.9 500GB drives configured as a RAID 0 spanned disk in Windows. One of the drives has lost power and won't spin up at all. I know this normally means death for the data on both drives, but I have a cunning plan:

    - DISK 1 - NO POWER, RAID 0 DISK
    - DISK 2 - FULLY FUNCTIONAL RAID 0 DISK
    - DISK 3 - FULLY FUNCTIONAL SPARE DISK

    Copy the working drive's (disk 2) data to a third 500GB disk (disk 3), remove the logic board from the working disk (disk 2) and replace it with the non-working logic board from the broken drive (disk 1), then hopefully recreate the RAID 0 with disk 1 and disk 3, just long enough to get the data off it. Hope this makes sense; here are my questions:

    1. Windows Disk Manager currently recognises disk 2 but won't let me access it in any way, so copying the data off it (or getting a disk image) can't be done in Windows. Does anyone know of any software (in Linux, or self-booting) that would allow me to access this disk?
    2. Does anyone know of any software that will recreate the spanned drive from two disk images?
    3. Am I missing any key information that means I definitely shouldn't even bother starting this? I know it's a long shot anyway, but it's worth a try unless I definitely can't do it.

    The irritating thing is that I am sure it's a logic board failure on disk 1, as it simply won't power up at all - suddenly no signs of life - so I am sure the data is intact! Any help would be really appreciated! Thanks

    Read the article

  • Data take on with Drupal 6

    - by Robert MacLean
    We are migrating our current intranet to Drupal 6, and there is a lot of data within the current system, which can be classified into:

    - List data: general lists of fields. A common use is a phone list of the employees' phone numbers.
    - Document repository: basically a web version of a file share for documents.

    I can easily get the data plus meta information out, but how do I bulk upload the two types of data into Drupal? Uploading the hundreds of thousands of items manually is just not acceptable.
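
    A minimal sketch of the script-driven route for the list data, assuming Drupal 6's node API (node_save) after a full bootstrap; the content type and CCK field names here are hypothetical:

        <?php
        // Bulk-create nodes from a CSV export of the phone list.
        // Run after bootstrapping Drupal (e.g. from a drush script).
        $handle = fopen('phone_list.csv', 'r');
        while (($row = fgetcsv($handle)) !== FALSE) {
          list($name, $phone) = $row;
          $node = new stdClass();
          $node->type = 'phone_entry';               // hypothetical content type
          $node->title = $name;
          $node->field_phone[0]['value'] = $phone;   // hypothetical CCK field
          $node->uid = 1;
          node_save($node);
        }
        fclose($handle);
        ?>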

    Read the article

  • Summing up spreadsheet data when a column contains “#N/A”

    - by Doris
    I am using Google Spreadsheets to work up some historical stock data, and I use a Google function (=GOOGLEFINANCE(...)) to import the historical closing prices for a stock; then I work with that data further. But in the list of data generated by the GOOGLEFINANCE function, one of the amounts comes up as #N/A. I don't know why, but it happens for various symbols that I have tried. When I use a MAX function on the array, which includes the N/A line, the MAX function does not come up with anything but an N/A, so the N/A throws off any further functions. I thought I'd create a second column to the right of the imported data in which I can give it an IF function, something like IF((A1<0), "0", A1), with the expectation that it would return 0 if cell A1 is the N/A, and the cell value if it is not N/A. However, this still returns N/A. I also tried an ISBLANK function, but that resulted in the same N/A. Does anyone have any suggestions for a workaround to eliminate the N/A from an array of numbers that I am trying to work with?
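
    One hedged suggestion: #N/A is an error value rather than a blank or a negative number, so neither the less-than comparison nor ISBLANK ever "sees" it; an error-test function is the usual workaround (assuming the standard ISNA function in Google Sheets):

        =IF(ISNA(A1), 0, A1)

    This returns 0 when A1 is #N/A and the cell value otherwise, so a helper column of these feeds MAX cleanly. ISERROR can be used instead of ISNA to also absorb other error types.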

    Read the article

  • TDWI World Conference Features Oracle and Big Data

    - by Mandy Ho
    Oracle is a Gold Sponsor at this year's TDWI World Conference Series, held at the Manchester Grand Hyatt in San Diego, California - July 31 to Aug 1. The theme of this event is Big Data Tipping Point: BI Strategies in the Era of Big Data. The conference features an educational look at how data is now being generated so quickly that organizations across all industries need new technologies to stay ahead - to understand customer behavior, detect fraud, improve processes and accelerate performance. Attendees will hear how the internet, social media and streaming data are fundamentally changing business intelligence and data warehousing. Big data is reaching critical mass - the tipping point.

    Oracle will be conducting the following evening workshop. To reserve your space, call 1.800.820.5592 ext 10775.

    Title: Integrating Big Data into Your Data Center (or A Big Data Reference Architecture)
    Date: Wed., August 1, 2012, at 7:00 p.m.
    Venue: Manchester Grand Hyatt, San Diego

    Weblogs, social media, smart meters, sensors and other devices generate high volumes of low-density information that isn't readily accessible in enterprise data warehouses and business intelligence applications today. But this data can have relevant business value, especially when analyzed alongside traditional information sources. In this session, we will outline a reference architecture for big data that will help you maximize the value of your big data implementation. You will learn:

    - The key technologies in a big data architecture, and their specific use cases
    - The integration points of the various technologies and how they fit into your existing IT environment
    - How to effectively leverage analytical sandboxes for data discovery and agile development of data-driven solutions

    At the end of this session you will understand the reference architecture and have the tools to implement this architecture at your company.

    Presenter: Jean-Pierre Dijcks, Senior Principal Product Manager

    Don't miss our booth and the chance to meet with our big data experts on the exhibition floor at booth #306.

    Read the article

  • How do you encode Algebraic Data Types in a C#- or Java-like language?

    - by Jörg W Mittag
    There are some problems which are easily solved by Algebraic Data Types. For example, a List type can be very succinctly expressed as:

        data ConsList a = Empty | ConsCell a (ConsList a)

        consmap f Empty          = Empty
        consmap f (ConsCell a b) = ConsCell (f a) (consmap f b)

        l = ConsCell 1 (ConsCell 2 (ConsCell 3 Empty))
        consmap (+1) l

    This particular example is in Haskell, but it would be similar in other languages with native support for Algebraic Data Types. It turns out that there is an obvious mapping to OO-style subtyping: the datatype becomes an abstract base class and every data constructor becomes a concrete subclass. Here's an example in Scala:

        sealed abstract class ConsList[+T] {
          def map[U](f: T => U): ConsList[U]
        }

        object Empty extends ConsList[Nothing] {
          override def map[U](f: Nothing => U) = this
        }

        final class ConsCell[T](first: T, rest: ConsList[T]) extends ConsList[T] {
          override def map[U](f: T => U) = new ConsCell(f(first), rest.map(f))
        }

        val l = new ConsCell(1, new ConsCell(2, new ConsCell(3, Empty)))
        l.map(1+)

    The only thing needed beyond naive subclassing is a way to seal classes, i.e. a way to make it impossible to add subclasses to a hierarchy. How would you approach this problem in a language like C# or Java? The two stumbling blocks I found when trying to use Algebraic Data Types in C# were:

    - I couldn't figure out what the bottom type is called in C# (i.e. I couldn't figure out what to put into class Empty : ConsList< ??? >)
    - I couldn't figure out a way to seal ConsList so that no subclasses can be added to the hierarchy

    What would be the most idiomatic way to implement Algebraic Data Types in C# and/or Java? Or, if it isn't possible, what would be the idiomatic replacement?
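
    One commonly cited encoding, sketched here with hedging (C# 6+ syntax): C# has no bottom type playing the role of Scala's Nothing, so Empty is typically made generic too, and the hierarchy is "sealed" by giving the abstract base class a private constructor - only nested types can reach it, so no outside subclasses are possible:

        using System;

        // ADT-style cons list. The private constructor means only the
        // nested Empty/Cell types can derive from ConsList<T>, which
        // stands in for Scala's "sealed".
        public abstract class ConsList<T>
        {
            private ConsList() { }

            public abstract ConsList<U> Map<U>(Func<T, U> f);

            public sealed class Empty : ConsList<T>
            {
                public override ConsList<U> Map<U>(Func<T, U> f)
                    => new ConsList<U>.Empty();
            }

            public sealed class Cell : ConsList<T>
            {
                private readonly T first;
                private readonly ConsList<T> rest;

                public Cell(T first, ConsList<T> rest)
                {
                    this.first = first;
                    this.rest = rest;
                }

                public override ConsList<U> Map<U>(Func<T, U> f)
                    => new ConsList<U>.Cell(f(first), rest.Map(f));
            }
        }

        // Usage:
        //   var l = new ConsList<int>.Cell(1, new ConsList<int>.Cell(2,
        //               new ConsList<int>.Empty()));
        //   var m = l.Map(x => x + 1);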

    Read the article

  • Suggest a good method with least lookup time complexity

    - by Amrish
    I have a structure which has 3 identifier fields and one value field, and I have a list of these objects. To give an analogy, the identifier fields are like the primary keys to the object: these 3 fields uniquely identify an object.

        Class {
            int a1;
            int a2;
            int a3;
            int value;
        };

    I would have a list of, say, 1000 objects of this datatype. I need to check for specific values of these identity key fields by passing values of a1, a2 and a3 to a lookup function, which would check if any object with those specific values of a1, a2 and a3 is present and return its value. What is the most effective way to implement this to achieve the best lookup time?

    One solution I could think of is to have a 3-dimensional matrix of length say 1000 and populate the value in it. This has a lookup time of O(1). But the disadvantages are: 1. I need to know the length of the array. 2. For more identity fields (say 20), I would need a 20-dimensional matrix, which would be an overkill on memory. For my actual implementation, I have 23 identity fields. Can you suggest a good way to store this data which would give me the best lookup time?
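
    One common approach, sketched with hedging (C++11; all names are illustrative): treat the identifiers as a composite key in an associative container, which scales to 23 key fields without any dense matrix. std::map on a tuple key gives O(log n) lookups; switching to std::unordered_map with a tuple hash gives expected O(1):

        #include <cstdio>
        #include <map>
        #include <tuple>

        struct Record { int a1, a2, a3, value; };

        int main() {
            // Composite key: the three identifiers together index the value.
            std::map<std::tuple<int, int, int>, int> index;

            Record records[] = { {1, 2, 3, 42}, {4, 5, 6, 99} };
            for (const Record& r : records)
                index[std::make_tuple(r.a1, r.a2, r.a3)] = r.value;

            // Lookup by the three identifier values.
            auto it = index.find(std::make_tuple(1, 2, 3));
            if (it != index.end())
                std::printf("value = %d\n", it->second);   // prints 42
            return 0;
        }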

    Read the article

  • Why is my ServiceOperation method missing from my WCF Data Services client proxy code?

    - by Kev
    I have a simple WCF Data Services service and I want to expose a Service Operation as follows:

        [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
        public class ConfigurationData : DataService<ProductRepository>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(IDataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.ReadMultiple | EntitySetRights.ReadSingle);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                config.UseVerboseErrors = true;
            }

            // This operation isn't getting generated client side
            [WebGet]
            public IQueryable<Product> GetProducts()
            {
                // Simple example for testing
                return (new ProductRepository()).Product;
            }
        }

    Why isn't the GetProducts method visible when I add the service reference on the client?
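
    For what it's worth (hedged, based on the WCF Data Services client behavior as I understand it): Add Service Reference generates code for entity types and entity sets only, not for service operations, so GetProducts will never appear on the proxy. The usual workaround is to invoke the operation by name through the DataServiceContext; a sketch, assuming the generated context class is also called ConfigurationData and the service URI below:

        // Call the service operation by name via the generated context.
        var ctx = new ConfigurationData(
            new Uri("http://localhost/ConfigurationData.svc"));   // URI assumed

        // CreateQuery<T> targets a service operation returning IQueryable<T>.
        var products = ctx.CreateQuery<Product>("GetProducts").ToList();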

    Read the article

  • How to Convert multiple sets of Data going from left to right to top to bottom the Pythonic way?

    - by ThinkCode
    Following is a sample of sets of contacts for each company, going from left to right:

        ID  Company  ContactFirst1  ContactLast1  Title1  Email1             ContactFirst2  ContactLast2  Title2  Email2
        1   ABC      John           Doe           CEO     [email protected]   Steve          Bern          CIO     [email protected]

    How do I get them to go top to bottom, as shown?

        ID  Company  ContactFirst  ContactLast  Title  Email
        1   ABC      John          Doe          CEO    [email protected]
        1   ABC      Steve         Bern         CIO    [email protected]

    I am hoping there is a Pythonic way of solving this task. Any pointers or samples are really appreciated!

    P.S.: In the actual file, there are 10 sets of contacts going from left to right, and there are a few thousand such records. It is a CSV file, and I loaded it into MySQL to manipulate the data.
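
    A minimal Pythonic sketch, assuming the CSV is laid out as in the sample - two leading columns, then repeating blocks of four contact columns (file names hypothetical):

        import csv

        FIELDS_PER_CONTACT = 4   # first, last, title, email

        with open("wide.csv", newline="") as src, \
             open("long.csv", "w", newline="") as dst:
            reader = csv.reader(src)
            writer = csv.writer(dst)
            next(reader)   # skip the wide header
            writer.writerow(["ID", "Company", "ContactFirst",
                             "ContactLast", "Title", "Email"])
            for row in reader:
                rec_id, company, rest = row[0], row[1], row[2:]
                # Walk the repeating contact blocks left to right.
                for i in range(0, len(rest), FIELDS_PER_CONTACT):
                    contact = rest[i:i + FIELDS_PER_CONTACT]
                    if any(field.strip() for field in contact):  # skip empties
                        writer.writerow([rec_id, company] + contact)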

    Read the article

  • How to add data manually to a Core Data entity

    - by pankaj
    Hi, I am working with Core Data for the first time. I have just created an entity and attributes for that entity. I want to add some data inside the entity (you could say I want to add data to a table). Earlier, when I was using SQLite, I would add data using the terminal. But here in Core Data I am not able to find a place where I can manually add data. I just want to add data to the entity and display it in a UITableView. I have gone through the Core Data documentation, but it does not explain how to add data manually, although it explains how I can add it programmatically. I don't want to do it programmatically; I want to do it manually. Thanks in advance

    Read the article

  • C Run-Time library part 2

    - by b-gen-jack-o-neill
    Hi, I was advised that when I have further questions on my older ones, I should create a new question and refer to the old one. So, this is the original question: What is the C runtime library?

    OK, from your answers, I now get that the statically linked libraries are Microsoft's implementation of the C standard functions. Now, if I get it right, the scheme would be as follows: I want to use printf(), so I must include <stdio.h>, which just tells the compiler there is a function printf() with these parameters. Now, when I compile the code, because printf() is defined in the C standard library, and because Microsoft decided to name it the C Run-Time library, it gets automatically statically linked from libcmt.lib (if libcmt.lib is set in the compiler) at compile time. I ask because on Wikipedia, in the article about runtime libraries, it says that a runtime library is linked at runtime, but .lib files are linked at compile time - am I right?

    Now, what confuses me: there is a .dll version of the C standard library. But I thought that to link a .dll file, you must actually call the WinAPI to load that library. So how can these functions be dynamically linked if there is no static library to provide the code that tells Windows to load the desired functions from the dll?

    And a really last question on this subject: are C standard library functions also calls to the WinAPI, even though they are not .dll files like the more advanced WinAPI functions? I mean, in the end, to access the framebuffer and print something, you must tell Windows to do it, since the OS cannot let you directly manipulate the hardware. I think of it like this: the OS must be written to support all C standard library functions the same way across similar versions, since they are statically linked, and it can support the more complex WinAPI calls differently, because a new version of the OS can have adjustments in the .dll file.

    Read the article

  • How do I set default values on new properties for existing entities after light weight core data migration?

    - by Moritz
    I've successfully completed lightweight migration on my Core Data model. My custom entity Vehicle received a new property 'tirePressure', which is an optional property of type double with the default value 0.00. When 'old' Vehicles are fetched from the store (Vehicles that were created before the migration took place), the value of their 'tirePressure' property is nil. (Is that expected behavior?) So I thought: "No problem, I'll just do this in the Vehicle class:"

        - (void)awakeFromFetch {
            [super awakeFromFetch];
            if (nil == self.tirePressure) {
                [self willChangeValueForKey:@"tirePressure"];
                self.tirePressure = [NSNumber numberWithDouble:0.0];
                [self didChangeValueForKey:@"tirePressure"];
            }
        }

    Since "change processing is explicitly disabled around" awakeFromFetch, I thought the calls to willChangeValueForKey and didChangeValueForKey would mark 'tirePressure' as dirty. But they don't. Every time these Vehicles are fetched from the store, 'tirePressure' continues to be nil, despite having saved the context.

    Read the article

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and that you need to do data validation during the ETL process (e.g. dates are valid, etc.). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.
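
    A hedged illustration of the kind of reconciliation query that tends to work for fact tables (all table and column names hypothetical): compare row counts and a control total between the cleansed source and the mart, grouped by a shared dimension, and show only the groups that disagree:

        -- Reconcile the fact table against the cleansed source by month.
        -- Matching counts and totals per group are strong evidence of a
        -- complete load; COALESCE keeps NULL groups comparable.
        SELECT s.load_month,
               s.row_count    AS source_rows,   f.row_count    AS fact_rows,
               s.total_amount AS source_amount, f.total_amount AS fact_amount
        FROM (SELECT load_month, COUNT(*) AS row_count,
                     SUM(amount) AS total_amount
              FROM cleansed.transactions
              GROUP BY load_month) s
        FULL OUTER JOIN
             (SELECT load_month, COUNT(*) AS row_count,
                     SUM(amount) AS total_amount
              FROM mart.fact_transactions
              GROUP BY load_month) f
          ON s.load_month = f.load_month
        WHERE COALESCE(s.row_count, -1)    <> COALESCE(f.row_count, -1)
           OR COALESCE(s.total_amount, -1) <> COALESCE(f.total_amount, -1);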

    Read the article

  • What is a good approach for a Data Access Layer?

    - by Adil Mughal
    Our software is a customized Human Resource Management System (HRMS) using ASP.NET, with Oracle as the database, and now we are actually moving to make it a product that supports multiple tenants with their own databases. Our options:

    - Use NHibernate to support multiple databases and make use of OO. But we are concerned about NHibernate's learning curve and any problems we might face.
    - Make a generalized DAL which will continue working with Oracle using stored procedures, and use tools to convert it to other databases such as SQL Server or MySQL. There is a risk associated with having to support multiple database-dependent versions of a single script.
    - Provide the software as a Service (SaaS) and maintain the way we conduct business. However, there may be clients who do not want or trust the Cloud or other SaaS business models.

    With this in mind, what's the best data access layer technique?

    Read the article

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort []     = []
        quicksort (p:xs) = quicksort [y | y <- xs, y < p] ++ [p] ++ quicksort [y | y <- xs, y >= p]

    Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice. But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.

    Read the article

  • Select data from three different tables with null data

    - by user3678972
    I am new to SQL. My question is how to get data from three different tables, allowing for null values. I have tried a query as below:

        SELECT *
        FROM [USER]
        JOIN [Location]    ON ([Location].UserId = [USER].Id)
        JOIN [ParentChild] ON ([ParentChild].UserId = [USER].Id)
        WHERE ParentId = 7

    which I found from this link. It's working, but it does not fetch every row associated with the given ParentId: it only fetches rows whose data is available in all the tables, and omits rows which have no match in the Location table even though they come under the given ParentId. For example:

        UserId  ParentId
        1       7
        8       7

    For UserId 8 there is data available in the Location table, so it fetches all the data. But there is no data for UserId 1 in the Location table, so the query does not return anything for it. I want all the data: if there is no data for a UserId, it can return just null columns. Is that possible? I hope everyone can understand my problem.
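
    A hedged sketch of the usual fix (table names from the question): an inner JOIN drops users with no Location row, while a LEFT JOIN keeps them and fills the missing Location columns with NULLs:

        -- Keep every user under ParentId = 7, with NULL Location
        -- columns where no Location row exists.
        SELECT u.*, pc.*, l.*
        FROM [ParentChild] pc
        JOIN [USER] u          ON u.Id = pc.UserId
        LEFT JOIN [Location] l ON l.UserId = u.Id
        WHERE pc.ParentId = 7;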

    Read the article

  • How does Core Data determine if an NSObject's data can be dropped?

    - by Kevin
    In the app I am working on now, I was storing about 500 images in Core Data. I have since pulled those images out and store them in the file system, but in the process I found that the app would crash on the device if I had an array of 500 objects with image data in them. An array of 500 object IDs, with the image data in those objects, worked fine. The 500 objects without the image data also worked fine. I found that I got the best performance with both an array of object IDs and the image data stored on the filesystem instead of in Core Data. The conclusion I came to was that having an object in an array tells Core Data I am "using" that object, so Core Data holds on to the data. Is this correct?

    Read the article

  • Java convert time format to integer or long

    - by behrk2
    Hello, I'm wondering what the best method is to convert a time string in the format of 00:00:00 to an integer or a long? My ultimate goal is to be able to convert a bunch of string times to integers/longs, add them to an array, and find the most recent time in the array... I'd appreciate any help, thanks!

    OK, based on the answers, I have decided to go ahead and compare the strings directly. However, I am having some trouble. It is possible to have more than one "most recent" time, that is, if two times are equal. If that is the case, I want to add the indexes of both of those times to an ArrayList. Here is my current code:

        days[0] = "15:00:00";
        days[1] = "17:00:00";
        days[2] = "18:00:00";
        days[3] = "19:00:00";
        days[4] = "19:00:00";
        days[5] = "15:00:00";
        days[6] = "13:00:00";

        ArrayList<Integer> indexes = new ArrayList<Integer>();
        String curMax = days[0];
        for (int x = 1; x < days.length; x++) {
            if (days[x].compareTo(curMax) > 0) {
                curMax = days[x];
                indexes.add(x);
                System.out.println("INDEX OF THE LARGEST VALUE: " + x);
            }
        }

    However, this is adding indexes 1, 2, and 3 to the ArrayList... Can anyone help me?
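
    Since the strings are fixed-width HH:mm:ss, lexicographic comparison is already chronologically correct, so no conversion to long is strictly required; the remaining issue is bookkeeping. A hedged sketch: clear the index list whenever a strictly later time appears, and append on ties:

        import java.util.ArrayList;
        import java.util.List;

        public class LatestTime {
            public static void main(String[] args) {
                String[] days = {"15:00:00", "17:00:00", "18:00:00",
                                 "19:00:00", "19:00:00", "15:00:00", "13:00:00"};

                List<Integer> indexes = new ArrayList<Integer>();
                String curMax = days[0];
                indexes.add(0);

                for (int x = 1; x < days.length; x++) {
                    int cmp = days[x].compareTo(curMax);
                    if (cmp > 0) {          // strictly later: new maximum
                        curMax = days[x];
                        indexes.clear();
                        indexes.add(x);
                    } else if (cmp == 0) {  // tie: remember this index too
                        indexes.add(x);
                    }
                }
                // Prints: latest 19:00:00 at indexes [3, 4]
                System.out.println("latest " + curMax + " at indexes " + indexes);
            }
        }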

    Read the article

  • Windows Form UserControl design time properties

    - by Raffaeu
    I am struggling with a UserControl. I have a UserControl that represents a Pager, and it has a Presenter object property exposed in this way:

        [Browsable(false)]
        [DesignSerializationAttribute(DesignSerializationAttribute.Hidden)]
        public object Presenter { get; set; }

    The code itself works, as I can drag and drop the control into a Windows Form without having Visual Studio initialize this property. Now, because in the Load event of this control I call a method of the Presenter, which at run time is null... I have introduced this additional code:

        public override void OnLoad(...)
        {
            if (this.DesignMode)
            {
                base.OnLoad(e);
                return;
            }
            presenter.OnViewReady();
        }

    Now, every time I open a Window that contains this UserControl, Visual Studio modifies the Window's designer code. So, as soon as I open it, VS asks me if I want to save it... and of course, if I add a control to the Window, it doesn't keep the changes... As soon as I remove the UserControl Pager, the problem disappears... How should I tackle this in the proper way? I just don't want the presenter property to be initialized at design time, as it is injected at runtime...
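
    One thing worth checking (hedged; based on the standard System.ComponentModel attributes): the attribute that keeps the designer from serializing a property is DesignerSerializationVisibility, so if the attribute in the snippet above is not resolving to that, the designer may still be writing the property into the .designer.cs file. A sketch of the usual combination:

        using System.ComponentModel;
        using System.Windows.Forms;

        public class Pager : UserControl
        {
            // Hidden from the property grid AND from designer code
            // generation, so InitializeComponent never assigns it and it
            // stays null until the presenter is injected at runtime.
            [Browsable(false)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
            public object Presenter { get; set; }
        }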

    Read the article

  • Live clock javascript starts from my custom time

    - by newworroo
    I was trying to create a live/dynamic clock that is based on my custom time instead of the system time. There are many scripts, but I couldn't find a clock that starts from my custom time. Here is an example that I'm trying to modify. The problem is that the seconds don't change, and it looks like I need to use AJAX. Is there any way to do it without AJAX? If not, help me do it using AJAX! The reason I don't like the AJAX method is that another page has to be called and refreshed, which will eat server RAM. Example: http://www.javascriptkit.com/script/cut2.shtml

    Before:

        <script>
        function show(){
            var Digital = new Date()
            var hours = Digital.getHours()
            var minutes = Digital.getMinutes()
            var seconds = Digital.getSeconds()
            ...

    After:

        <script>
        function show(){
            var Digital = new Date()
            var hours = <?php echo $hr; ?>;
            var minutes = <?php echo $min; ?>;
            var seconds = <?php echo $sec; ?>;
            ...
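
    A hedged sketch of the no-AJAX approach: emit the server time once via PHP, compute its offset from the client clock at load, and let setInterval advance a Date derived from that offset - the seconds then tick with no further server requests. The PHP variables extend the $hr/$min/$sec of the question; $yr, $mon and $day are hypothetical:

        <span id="clock"></span>
        <script>
        // Server time, printed once by PHP (JS months are 0-based).
        var serverAtLoad = new Date(<?php echo $yr; ?>, <?php echo $mon - 1; ?>,
                                    <?php echo $day; ?>, <?php echo $hr; ?>,
                                    <?php echo $min; ?>, <?php echo $sec; ?>);
        var offset = serverAtLoad.getTime() - Date.now();

        function show() {
            // The server's "now", ticking on the client clock.
            var now = new Date(Date.now() + offset);
            document.getElementById("clock").innerHTML =
                now.toTimeString().split(" ")[0];   // HH:MM:SS
        }
        setInterval(show, 1000);
        show();
        </script>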

    Read the article
