Search Results

Search found 68762 results on 2751 pages for 'data transfer objects'.


  • Is there an easy way to mass-transfer all files between two computers?

    - by Raven Dreamer
    I'm currently visiting my parents for the Christmas holidays, and as I'm sure is the case for many in the post-tech generation, my arrival brings with it the assumption of personal tech support. I've been tasked with setting up my folks' new computer, and they want to make sure all their files (yes, all their files) get transferred over to the new device. Short of manually dragging each folder in the C:\ directory onto an external hard drive and then out onto the recipient computer, is there an easier or faster way to do this?

    Read the article

  • Content Length and Transfer Encoding Chunked nginx, node-http-proxy

    - by rampr
    I have the following setup: node-http-proxy acts as a reverse proxy, forwarding all requests to nginx or socket.io as necessary. My problem is this: when I send an HTTP DELETE request from the browser, node-http-proxy adds a "Transfer-Encoding: chunked" header, because the request from the browser has no Content-Length. The request has no Content-Length because it has no body. nginx doesn't like the Transfer-Encoding: chunked header and throws a 411, asking for a Content-Length. The problem goes away when I send dummy data as part of the DELETE request: there is then a Content-Length, node-http-proxy doesn't add the Transfer-Encoding: chunked header, and nginx is happy. I want to understand whether node-http-proxy is misbehaving here, since it adds a Transfer-Encoding: chunked header whenever the Content-Length is missing, even when there is no body at all.

    Read the article

  • Data Access Layer - static list objects and caching

    - by Truegilly
    Hello, I am developing a site using .NET MVC. I have a data access layer which basically consists of static list objects that are created from data within my database. The method that rebuilds this data first clears all the list objects; once they are empty, it re-adds the data. Here is an example of one of the lists I'm using. It's a method which loads all the UK postcodes; there are about 50 methods similar to this in my application that return all sorts of information, such as towns, regions, members, emails, etc.

        public static List<PostCode> AllPostCodes = new List<PostCode>();

    When the rebuild method is called, it first clears the list:

        ListPostCodes.AllPostCodes.Clear();

    Next it rebuilds the data by calling the GetAllPostCodes() method:

        /// <summary>
        /// Static method that returns all the UK postcodes.
        /// </summary>
        public static void GetAllPostCodes()
        {
            using (fab_dataContextDataContext db = new fab_dataContextDataContext())
            {
                IQueryable AllPostcodeData = from data in db.PostCodeTables select data;
                IDbCommand cmd = db.GetCommand(AllPostcodeData);
                SqlDataAdapter adapter = new SqlDataAdapter();
                adapter.SelectCommand = (SqlCommand)cmd;
                DataSet dataSet = new DataSet();
                cmd.Connection.Open();
                adapter.FillSchema(dataSet, SchemaType.Source);
                adapter.Fill(dataSet);
                cmd.Connection.Close();

                // create the objects
                foreach (DataRow row in dataSet.Tables[0].Rows)
                {
                    PostCode postcode = new PostCode();
                    postcode.ID = Convert.ToInt32(row["PostcodeID"]);
                    postcode.Outcode = row["OutCode"].ToString();
                    postcode.Latitude = Convert.ToDouble(row["Latitude"]);
                    postcode.Longitude = Convert.ToDouble(row["Longitude"]);
                    postcode.TownID = Convert.ToInt32(row["TownID"]);
                    AllPostCodes.Add(postcode);
                    postcode = null;
                }
            }
        }

    The rebuild occurs every hour, which ensures the site has a fresh set of cached data. The issue I've got is that occasionally, if the server is hit by a request during a rebuild, an exception is thrown: "Index was outside the bounds of the array." It happens while a list is being cleared (ListPostCodes.AllPostCodes.Clear(); throws the exception, although it's not always this particular list). Once this exception is thrown the application dies and all users are affected; I have to restart the server to fix it. I have two questions: 1) If I used caching instead of static objects, would this help? 2) Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"? Any help is most appreciated. truegilly
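
    A pattern that avoids the clear-then-refill race entirely is to build the replacement list in private and publish it with a single reference assignment, so readers only ever see the old list or the complete new one. A minimal sketch of the idea, in which PostCodeCache and LoadPostCodesFromDb are hypothetical stand-ins for the class and LINQ/DataSet loading code above:

        using System.Collections.Generic;

        public static class PostCodeCache
        {
            // Volatile so the reference swap is immediately visible to all threads.
            private static volatile List<PostCode> allPostCodes = new List<PostCode>();

            public static List<PostCode> AllPostCodes
            {
                get { return allPostCodes; }
            }

            public static void Rebuild()
            {
                // Fill a private list first; readers still see the old one.
                List<PostCode> fresh = LoadPostCodesFromDb();

                // Publish in one reference assignment; no Clear() is ever needed.
                allPostCodes = fresh;
            }

            // Hypothetical: wraps the DataSet code from the question,
            // adding rows into a local list instead of a shared one.
            private static List<PostCode> LoadPostCodesFromDb()
            {
                List<PostCode> list = new List<PostCode>();
                // ... populate from the database as above ...
                return list;
            }
        }

    Callers should treat the returned list as read-only; the superseded list is simply left for the garbage collector.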

    Read the article

  • Fast (non-blocking) way to transfer many files to another server

    - by Nyxynyx
    I am currently attempting to transfer over 1 million files from one server to another. Using wget, it seems extremely slow, probably because it starts a new transfer only after the previous one has completed. Question: is there a faster, non-blocking (asynchronous) way to do the transfer? I do not have enough space on the first server to compress the files into a tar.gz and transfer that over. Thanks!

    Read the article

  • Hibernate Query for a List of Objects that matches a List of Objects' ids

    - by sal
    Given classes Foo and Bar, which have Hibernate mappings to tables Foo, A, B and C:

        public class Foo {
            Integer aid;
            Integer bid;
            Integer cid;
            ...;
        }

        public class Bar {
            A a;
            B b;
            C c;
            ...;
        }

    I build a List<Foo> fooList of an arbitrary size, and I would like to use Hibernate to fetch a List<Bar> where the resulting list will look something like this:

        Bar[1] = [X1,Y2,ZA,...]
        Bar[2] = [X1,Y2,ZB,...]
        Bar[3] = [X1,Y2,ZC,...]
        Bar[4] = [X1,Y3,ZD,...]
        Bar[5] = [X2,Y4,ZE,...]
        Bar[6] = [X2,Y4,ZF,...]
        Bar[7] = [X2,Y5,ZG,...]
        Bar[8] = ...

    where each Xi, Yi and Zi represents a unique object. I know I can iterate fooList, fetch each List<Bar> and call barList.addAll(...) to build the result list, with something like this:

        barList.addAll(s.createQuery("from Bar bar where bar.aid = :aid and ... ")
            .setEntity("aid", foo.getAid())
            .setEntity("bid", foo.getBid())
            .setEntity("cid", foo.getCid())
            .list());

    Is there any easier way, ideally one that makes better use of Hibernate and makes a minimal number of database calls? Am I missing something? Is Hibernate not the right tool for this?
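
    One way to collapse the N queries into a single round trip is to OR together one (aid, bid, cid) conjunction per Foo, for example with the Criteria API. A rough sketch, assuming the property paths used here match the actual mappings (depending on how Bar maps a, b and c, you may need createAlias or different names):

        import java.util.List;

        import org.hibernate.Criteria;
        import org.hibernate.Session;
        import org.hibernate.criterion.Disjunction;
        import org.hibernate.criterion.Restrictions;

        public class BarQueries {
            @SuppressWarnings("unchecked")
            public static List<Bar> fetchAll(Session session, List<Foo> fooList) {
                // One (a AND b AND c) block per Foo, ORed together.
                Disjunction anyFoo = Restrictions.disjunction();
                for (Foo foo : fooList) {
                    anyFoo.add(Restrictions.conjunction()
                            .add(Restrictions.eq("a.id", foo.getAid()))
                            .add(Restrictions.eq("b.id", foo.getBid()))
                            .add(Restrictions.eq("c.id", foo.getCid())));
                }
                Criteria criteria = session.createCriteria(Bar.class);
                criteria.add(anyFoo);
                return criteria.list(); // one query instead of fooList.size()
            }
        }

    A very large fooList may need batching to keep the generated SQL within database limits.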

    Read the article

  • XP can't read data from transferred HD

    - by Alexander Miller
    Computer A, running XP, died. XP was installed on a fresh HD in computer B, and the data-backup HD from A was installed as a slave in B. B will not read it: it shows only two folders, Recycler and System Volume Info. All of these are older machines with IDE drives. What's going on, and how can I read/transfer the data from the transferred drive? This was only a trial run. Eventually I will need to transfer the master HD from A (which has XP on it) and read from its data partition, because (blush) the backup drive was not up to date.

    Read the article

  • How to transfer files from windows 7 pc to vista

    - by Samuel C
    I tried a direct connection over an Ethernet cable using Windows Easy Transfer, but no luck. I also tried ad-hoc networking, HomeGroup, and connecting through a router, both wired and wireless, with no luck either. I'm getting a little frustrated, as I need to transfer these files because I'm selling one of the machines this afternoon! All I need to do is transfer some documents and files. The Windows 7 PC recognizes the Vista PC, but Vista can't recognize the Windows 7 one.

    Read the article

  • creating BeanInfo objects in NetBeans 6.1 does not work for some objects

    - by Coder
    I have recently learned about BeanInfo classes in Java and have successfully used them to add icons to my custom GUI components which extend Swing components such as JTextField. However, I have a more specialized GUI component which extends another one of my GUI components, which in turn extends JTextField; i.e., the class hierarchy is of the form A - B - JTextField. I can create a BeanInfo object that works for class B, but when I click on the BeanInfo editor option in NetBeans to create a BeanInfo object for class A, nothing happens: there is no error pop-up, and no BeanInfo object is created. There isn't much difference between classes A and B. Both have default no-argument constructors and they are very similar to each other. The only thing I can really think of is that A uses generics and B does not. I would like to create a BeanInfo object for class A so that I can add custom icons for that component. Any help would be appreciated. Thanks.
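
    For what it's worth, a BeanInfo can also be written by hand when the wizard refuses. A minimal sketch for a component class A (the class and icon file names here are hypothetical; the BeanInfo must be named ABeanInfo and live in the same package as A):

        import java.awt.Image;
        import java.beans.BeanInfo;
        import java.beans.SimpleBeanInfo;

        public class ABeanInfo extends SimpleBeanInfo {
            @Override
            public Image getIcon(int iconKind) {
                // loadImage resolves the file relative to this class.
                if (iconKind == BeanInfo.ICON_COLOR_16x16) {
                    return loadImage("A16.gif");
                }
                if (iconKind == BeanInfo.ICON_COLOR_32x32) {
                    return loadImage("A32.gif");
                }
                return super.getIcon(iconKind);
            }
        }

    Generics are erased at runtime and introspection works on the raw class, so a hand-written BeanInfo should work even where the wizard balks.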

    Read the article

  • Is there any chance that my data will get silently corrupted with a robocopy SMB network transfer?

    - by Archagon
    I'm setting up a NAS box for the first time. At the moment, I have most of my data backed up to a few local hard drives, and I intend to transfer all the data to my NAS over Ethernet once the RAID array is set up. Since this is all happening over the network, I'm a bit worried about my data getting silently corrupted during the transfer. From what I understand, data generally doesn't get corrupted without notice on local transfers because a checksum is performed at some point by the drive or the OS. (This could be totally wrong.) Does the same thing happen with SMB, or is it up to the transferrer to check the integrity of the data? And if it doesn't happen with SMB, is there a protocol that does ensure data integrity? I know that rsync can checksum a transfer, but I'm on Windows and I already have a robocopy configuration that I like. Will my data be safe, or do I have to use an external checksum tool to make sure?

    Read the article

  • Sybase PowerDesigner Change Many (Find/Replace/Convert) Data Item's Data Types

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update the data types (NUMBER/TEXT) for every data item. I'd like to either do a find/replace within the Conceptual Data Model or somehow map to different data types when generating the Physical Data Model. For example: change the automatic conversion of Text to CLOB into Text to NVARCHAR(20). Thanks!

    Read the article

  • Autocad 2014 - Positioning electrical objects for plans (icons and symbols, not actual objects)

    - by zazkapulsk
    I am using AutoCAD to design my home. I am at the phase where I want to set the locations for the switches, lighting fixtures, power sockets, communication sockets, etc. I am thinking of something like this: http://www.the-house-plans-guide.com/electrical-blueprint-symbols.html. I am not interested in placing specific lighting fixtures or rendering them, just putting in the symbols and their distances. I could use AutoCAD Electrical, but that's a giant overshoot. What am I missing? Thanks.

    Read the article

  • are there any useful datasets available on the web for data mining?

    - by niko
    Hi, does anyone know a good resource where example (real) data can be downloaded for experimenting with statistics and machine learning techniques such as decision trees? I am currently studying machine learning techniques, and it would be very helpful to have real data for evaluating the accuracy of various tools. If anyone knows a good resource (perhaps CSV or XLS files, or any other format) I would be very thankful for a suggestion.

    Read the article

  • Put an array of Objects in nodes of another array of Objects [JAVA]

    - by zengr
        public class hello {
            public static void main(String[] args) {
                Object[] newarray = new Object[1];
                Object[] obj = new Object[2];
                obj[0] = "Number1"; // string value
                obj[1] = "Number2"; // string value
                newarray[0] = obj; // this works

                Object[] tmp_obj = new Object[2];
                tmp_obj = newarray[0]; // obviously does not work
                System.out.println(tmp_obj[0]); // nope
                System.out.println(tmp_obj[1]); // nope
            }
        }

    So, if I want to access the values "Number1" and "Number2", which are stored in obj[0] and obj[1], and obj is in newarray[0], what should I do? Is this possible? Thanks
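
    For what it's worth, the element comes back out of newarray statically typed as plain Object, so the missing piece is a cast back to Object[]. A minimal sketch:

        Object[] tmp_obj = (Object[]) newarray[0]; // the stored reference really is an Object[]
        System.out.println(tmp_obj[0]); // prints Number1
        System.out.println(tmp_obj[1]); // prints Number2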

    Read the article

  • How can I transfer a SQL Server 2005 license?

    - by jdk
    I have the wrong license number in one SQL Server. What's gone down is this: we virtualized a physical server, effectively cloning its software and licenses, SQL Server included. We want to repurpose the physical machine by keeping SQL Server and changing its license to another license key that we have purchased. We would prefer not to reinstall SQL Server. Can it be done?

    Read the article

  • Backup and Transfer Foobar2000 to a New Computer

    - by Mysticgeek
    If you are a fan of Foobar2000 you have undoubtedly tweaked it to the point where you don’t want to set it all up again on a new machine. Here we look at how to transfer Foobar2000 settings to a new Windows 7 machine.

    Note: For this article we are transferring Foobar2000 settings from one Windows 7 machine to another over a network running Windows Home Server.

    Foobar2000

    Foobar2000 is an awesome music player which is highly customizable, and which we’ve previously covered. Here we take a look at how it’s set up on the current machine. It’s nothing flashy, but it is set up for our needs and includes a lot of components and playlists.

    Backup Files

    Rather than wasting time setting everything up again on a new machine, we can back up the important files and replace them on the new machine. First type or copy the following into the Explorer address bar:

        %appdata%\foobar2000

    Now copy all of the files in the folder and store them on a network drive or some type of removable media or device.

    New Machine

    Now you can install the latest version of Foobar2000 on your new machine. You can go with a Standard install, as we will be replacing our backed-up configuration files anyway. When it launches, it will be set with all the defaults… and we want what we had back. Browse to the following on the new machine:

        %appdata%\foobar2000

    Delete all of the files in this directory, then replace them with the ones we backed up from the other machine. You’ll also want to navigate to C:\Program Files\Foobar2000 and replace the existing Components folder with the backed-up one. When you get the screen telling you there are already files of the same name, select Move and Replace, and check the box “Do this for the next 6 conflicts”.

    Now we’re back in business! Everything is exactly as it was on the old machine. In this example, we were moving the Foobar2000 files between computers on the same home network. All the music is coming from a directory on our Windows Home Server, so the paths hadn’t changed. If you’re moving these files to a computer on another network, say your work computer, you’ll need to adjust where the music folders point to.

    Windows XP

    If you’re setting up Foobar2000 on an XP machine, you can enter the following into the Run line:

        %appdata%\foobar2000

    Then copy your backed-up files into the Foobar2000 folder, and remember to swap out the Components folder in C:\Program Files\Foobar2000. Confirm to replace the files and folders by clicking Yes to All.

    Conclusion

    This method worked perfectly for us on our home network setup. There might be some other things that need a bit of tweaking, but overall the process is quick and easy. There are a lot of cool things you can do with Foobar2000, like ripping an audio CD to FLAC. If you’re a fan of Foobar2000 or considering switching to it, we will be covering more awesome features in future articles.
Download Foobar2000 – Windows Only

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall.

    Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s).

    The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures.

    What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI?

    Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS?

    What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as ‘provenance’ or ‘pedigree’)?

    The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined.

    When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which it came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance.

    Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be a data analysis environment like R, an application, or even an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs, keep your audit data, from every source, bring them together and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • NSURLConnection receives data even if no data was thrown back

    - by Anna Fortuna
    Let me explain my situation. Currently, I am experimenting with long polling using NSURLConnection. I found this and decided to try it. What I do is send a request to the server with a timeout interval of 300 seconds (5 minutes). Here is a code snippet:

        NSURL *url = [NSURL URLWithString:urlString];
        NSURLRequest *request = [NSURLRequest requestWithURL:url
                                                 cachePolicy:NSURLCacheStorageAllowedInMemoryOnly
                                             timeoutInterval:300];
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&resp
                                                         error:&err];

    Now I want to test whether the connection will hold the request if no data is sent back from the server, so what I did was this:

        if (data != nil)
            [self performSelectorOnMainThread:@selector(dataReceived:)
                                   withObject:data
                                waitUntilDone:YES];

    And the function dataReceived: looks like this:

        - (void)dataReceived:(NSData *)data
        {
            NSLog(@"DATA RECEIVED!");
            NSString *string = [NSString stringWithUTF8String:[data bytes]];
            NSLog(@"THE DATA: %@", string);
        }

    Server-side, I created a function that returns data once it fits the arguments and returns nothing if it doesn't. Here is a snippet of the PHP function:

        function retrieveMessages($vardata)
        {
            if (!empty($vardata)) {
                $result = check_data($vardata); // check_data returns 1 if $vardata
                                                // fits the arguments, and 0 if it fails to fit
                if ($result == 1) {
                    $jsonArray = array('Data' => $vardata);
                    echo json_encode($jsonArray);
                }
            }
        }

    As you can see, the function only returns data if $result is equal to 1. However, even if the function returns nothing, NSURLConnection still calls dataReceived:, meaning it still receives data, albeit empty. So can anyone help me here? How do I perform long polling using NSURLConnection? Basically, I want to maintain the connection as long as no data is returned. How do I do that? NOTE: I am new to PHP, so if my code is wrong, please point it out so I can correct it.
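
    For what it's worth, the PHP above returns immediately even when nothing matches, and a completed response with an empty body is still a response, which is why dataReceived: fires. For long polling, the server has to hold the request open until it has something to send. A rough sketch of that idea (pollForMessages and the 290-second budget are illustrative, and it assumes check_data consults a live source such as the database on each call):

        <?php
        function pollForMessages($vardata, $timeoutSeconds = 290)
        {
            $start = time();
            // Hold the request open until data appears or we approach
            // the client's 300-second timeout, whichever comes first.
            while (time() - $start < $timeoutSeconds) {
                if (!empty($vardata) && check_data($vardata) == 1) {
                    echo json_encode(array('Data' => $vardata));
                    return; // respond the moment there is data
                }
                sleep(1); // avoid a busy loop between checks
            }
            // Falling through sends an empty body; the client should
            // treat that as "no news" and simply reconnect.
        }
        ?>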

    Read the article

  • How can I scrape specific data from a website

    - by Stoney
    I'm trying to scrape data from a website for research. The URLs are nicely organized in an example.com/x format, with x an ascending number, and all of the pages are structured in the same way. I just need to grab certain headings and a few numbers, which are always in the same locations. I'll then need to get this data into structured form for analysis in Excel. I have used wget before to download pages, but I can't figure out how to grab specific lines of text. Excel has a feature to grab data from the web (Data > From Web), but from what I can see it only allows me to download tables. Unfortunately, the data I need is not in tables.

    Read the article

  • Why aren't my objects sorting with sortedArrayUsingDescriptors?

    - by clozach
    I expected the code below to return the objects in imageSet as a sorted array. Instead, there's no difference in the ordering before and after.

        NSSortDescriptor *descriptor = [[NSSortDescriptor alloc] initWithKey:@"imageID" ascending:YES];
        NSSet *imageSet = collection.images;
        for (CBImage *image in imageSet) {
            NSLog(@"imageID in Set: %@", image.imageID);
        }
        NSArray *imageArray = [[imageSet allObjects] sortedArrayUsingDescriptors:(descriptor, nil)];
        [descriptor release];
        for (CBImage *image in imageArray) {
            NSLog(@"imageID in Array: %@", image.imageID);
        }

    For what it's worth, CBImage is defined in my Core Data model. I don't know why sorting on managed objects would work any differently than on "regular" objects, but maybe it matters. As proof that @"imageID" should work as the key for the descriptor, here's what the two log loops above output for one of the sets I'm iterating through:

        2010-05-05 00:49:52.876 Cover Browser[38678:207] imageID in Array: 360339
        2010-05-05 00:49:52.876 Cover Browser[38678:207] imageID in Array: 360337
        2010-05-05 00:49:52.877 Cover Browser[38678:207] imageID in Array: 360338
        2010-05-05 00:49:52.878 Cover Browser[38678:207] imageID in Array: 360336
        2010-05-05 00:49:52.879 Cover Browser[38678:207] imageID in Array: 360335
        ...

    For extra credit, I'd love a general approach to troubleshooting NSSortDescriptor problems (especially if it also applies to troubleshooting NSPredicate). The behavior of these things seems totally opaque to me, and consequently debugging takes forever.
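
    A likely culprit: sortedArrayUsingDescriptors: takes an NSArray, and in C the parenthesized (descriptor, nil) is a comma expression that evaluates to nil, so the method receives no descriptors and leaves the order alone. A sketch of the presumably intended call:

        NSArray *descriptors = [NSArray arrayWithObjects:descriptor, nil];
        NSArray *imageArray = [[imageSet allObjects] sortedArrayUsingDescriptors:descriptors];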

    Read the article

  • How to marshal an object and its content (also objects)

    - by Waldo Spek
    I have a question for which I suspect the answer is a bit complex. At the moment I am writing a DLL (class library) in C#. This DLL uses a third-party library and therefore deals with third-party objects for which I do not have the source code. Now I am planning to create another DLL, which will be used at a later stage in my application. This second DLL should use the third-party objects (with their corresponding object states) created by the first DLL. Luckily the third-party objects extend the MarshalByRefObject class. I can marshal the objects using System.Runtime.Remoting.Marshal(...). I then serialize the objects using a BinaryFormatter and store them as a byte[] array. All goes well, and I can deserialize and unmarshal in the opposite way, ending up with my original third-party objects... so it appears... Nevertheless, when calling methods on my deserialized third-party objects I get object-internal exceptions. Normally these methods return other third-party objects, but (obviously, I guess) now those objects are missing, because they weren't serialized. Now my global question: how would I go about marshalling/serializing all the objects which my third-party objects reference, cascading down the "reference tree" to obtain a full and complete serialized object? Right now my guess is to preprocess: obtain all the objects, build my own custom object, and serialize that. But I'm hoping there is some other way...

    Read the article

  • Get type of the parameter from list of objects, templates, C++

    - by CrocodileDundee
    This question follows my previous question, "Get type of the parameter, templates, C++". There is the following data structure:

    Object1.h:

        template <class T>
        class Object1
        {
        private:
            T a1;
            T a2;
        public:
            T getA1() { return a1; }
            typedef T type;
        };

    Object2.h:

        template <class T>
        class Object2 : public Object1 <T>
        {
        private:
            T b1;
            T b2;
        public:
            T getB1() { return b1; }
        };

    List.h:

        template <typename Item>
        struct TList
        {
            typedef std::vector <Item> Type;
        };

        template <typename Item>
        class List
        {
        private:
            typename TList <Item>::Type items;
        };

    Is there any way to get the type T of an object from the list of objects (i.e., where Object is not a direct parameter of the function but a template parameter)?

        template <class Object>
        void process (List <Object> *objects)
        {
            typename Object::type a1 = objects[0].getA1();
            // g++ error: 'Object1<double>*' is not a class, struct, or union type
        }

    But this construction works (i.e., where Object represents a parameter of the function):

        template <class Object>
        void process (Object *o1)
        {
            typename Object::type a1 = o1.getA1(); // OK
        }

    Read the article
