Search Results

Search found 91220 results on 3649 pages for 'data type equivalent'.

Page 270 of 3649

  • SQL replication - collecting data

    - by Cicik
    Hi, I have a master SQL Server with a DB called Central and a lot of satellite SQL Servers with a DB called Client. I need to collect data from the log tables (LogTable) on each Client (each client has its own ID in the log table) into one big table on Central (LogTableCentral). Data must go only from Client to Central, and on each Client I want to keep only that client's data. I need a solution with a minimal amount of work on the client side, because of the number of clients. Central is SQL Server Enterprise; the Clients are SQL Server 2005 and 2008. Thanks a lot. EDIT: the data can be collected periodically (for example, every day at 01:00).
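
    One low-footprint alternative to replication is a pull job that runs on Central on the daily schedule and copies each client's new rows with SqlBulkCopy, so nothing has to be installed or configured on the clients. A minimal sketch, assuming hypothetical connection strings and column names (ClientId, LogDate):

        using System;
        using System.Data.SqlClient;

        class LogCollector
        {
            // Copies yesterday's log rows for one client into LogTableCentral.
            // Connection strings, table and column names are assumptions for illustration.
            static void CopyClientLog(string clientConnectionString, string centralConnectionString, int clientId)
            {
                using (var source = new SqlConnection(clientConnectionString))
                using (var destination = new SqlConnection(centralConnectionString))
                {
                    source.Open();
                    destination.Open();

                    var select = new SqlCommand(
                        "SELECT * FROM LogTable WHERE ClientId = @clientId AND LogDate >= @since", source);
                    select.Parameters.AddWithValue("@clientId", clientId);
                    select.Parameters.AddWithValue("@since", DateTime.Today.AddDays(-1));

                    using (SqlDataReader reader = select.ExecuteReader())
                    using (var bulkCopy = new SqlBulkCopy(destination))
                    {
                        bulkCopy.DestinationTableName = "LogTableCentral";
                        bulkCopy.WriteToServer(reader);
                    }
                }
            }
        }

    The job can be scheduled with SQL Server Agent on Central, and the source query can also track a last-copied id per client instead of a date cut-off if duplicates are a concern.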

    Read the article

  • typedef of a template with a template type as its parameter

    - by bryan sammon
    I'm having a problem with the typedef below; I can't seem to get it right:

        template <typename T>
        struct myclass1 {
            static const int member1 = T::GetSomeInt();
        };

        template <int I>
        struct myclass2 {
            typedef myclass1< myclass2<I> > anotherclass;
            static int GetSomeInt();
        };

        anotherclass MyObj1; // ERROR here, not instantiating the class

    When I try to initialize an anotherclass object, it gives me an error. There seems to be a problem with my typedef. Any idea what I am doing wrong? Any help is appreciated. Thanks, Bryan

    Read the article

  • problem reloading data in a table view after coming back from another view

    - by user129677
    I have a problem in my application; any help will be greatly appreciated. Basically it goes from view A to view B, and then comes back from view B. View A has dynamic data loaded from the database and displayed in a table view. This page also has an edit button, not on the navigation bar. When the user taps the edit button, it goes to view B, which shows a picker view where the user can make changes. Once that is done, the user taps the back button on the navigation bar, which saves the changes into NSUserDefaults and goes back to view A by popping view B. When coming back to view A, it should get the new data from NSUserDefaults, and it does: I used NSLog to print to the console and it shows the correct data. It should also invoke the viewWillAppear: method to get the new data for the table view, but it doesn't. It doesn't even call the tableView:numberOfRowsInSection: method; I placed an NSLog statement inside that method but nothing is printed to the console. As a result, view A still has the old data. The only way to get the new data into view A is to stop and restart the application. Both view A and view B are subclasses of UIViewController, with UITableViewDelegate and UITableViewDataSource. Here is my code in view A:

        - (void)viewWillAppear:(BOOL)animated {
            NSLog(@"enter in Schedule2ViewController ...");
            // load in data from database, and store into NSArray object
            //[self.theTableView reloadData];
            [self.theTableView setNeedsDisplay];
            //[self.theTableView setNeedsLayout];
        }

    Here, theTableView is a UITableView variable. I tried all three of reloadData, setNeedsDisplay, and setNeedsLayout, but none of them seemed to work. In view B, here are the methods corresponding to the back button on the navigation bar:

        - (void)viewDidLoad {
            UIBarButtonItem *saveButton = [[UIBarButtonItem alloc]
                initWithBarButtonSystemItem:UIBarButtonSystemItemSave
                                     target:self
                                     action:@selector(savePreference)];
            self.navigationItem.leftBarButtonItem = saveButton;
            [saveButton release];
        }

        - (IBAction) savePreference {
            NSLog(@"save preference.");
            // save data into the NSUserDefaults
            [self.navigationController popViewControllerAnimated:YES];
        }

    Am I doing it the right way? Or is there anything that I missed? Many thanks.

    Read the article

  • pointer reference type

    - by Codenotguru
    I am trying to write a function that takes a pointer argument, modifies what the pointer points to, and then returns the destination of the pointer as a reference. I am getting the following error: cannot convert 'int***' to 'int*' in return. Code:

        #include <iostream>
        using namespace std;

        int* increment(int** i) {
            i++;
            return &i;
        }

        int main() {
            int a=24;
            int *p=&a;
            int *p2;
            p2=increment(&p);
            cout<<p2;
        }

    Thanks for helping!

    Read the article

  • How to treat an instance variable as an instance of another type in C#

    - by Ben Aston
    I have a simple inheritance hierarchy with MyType2 inheriting from MyType1. I have an instance of MyType1, arg, passed in as an argument to a method. If arg is an instance of MyType2, then I'd like to perform some logic, transforming the instance. My code looks something like the code below. Having to create a new local variable b feels inelegant - is there a way of achieving the same behavior without the additional local variable?

        public MyType1 MyMethod(MyType1 arg)
        {
            if(arg is MyType2)
            {
                MyType2 b = arg as MyType2;
                //use b (which modifies "arg" as "b" is a reference to it)...
            }
            return arg;
        }
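
    A common idiom (a sketch, not necessarily better than the original): the local cannot really be eliminated, but the separate is check and as cast can be collapsed into a single as plus a null test, so the type is only examined once:

        public MyType1 MyMethod(MyType1 arg)
        {
            // One 'as' cast replaces the 'is' test plus a second cast.
            MyType2 b = arg as MyType2;
            if (b != null)
            {
                // use b; changes made through b are visible through arg,
                // because both variables reference the same object
            }
            return arg;
        }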

    Read the article

  • C# .NET 4.0 and Generics

    - by Mr Snuffle
    I was wondering if anyone could tell me whether this kind of behaviour is possible in C# 4.0. I have an object hierarchy I'd like to keep strongly typed. Something like this:

        class ItemBase {}
        class ItemType<T> where T : ItemBase
        {
            T Base { get; set; }
        }

        class EquipmentBase : ItemBase {}
        class EquipmentType : ItemType<EquipmentBase> {}

    What I want to be able to do is have something like this:

        ItemType item = new EquipmentType();

    And I want item.Base to return type ItemBase. Basically I want to know if it's smart enough to treat a strongly typed generic as its base class without the strong typing. The benefit of this is that I can simply cast an ItemType back to an EquipmentType and get all the strong typing again. I may be thinking about this all wrong...
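
    This is roughly what C# 4.0 covariance allows, but it has to go through an interface (or delegate) whose type parameter is marked out, and out rules out the setter. A minimal sketch under those assumptions (IItemType is a hypothetical name, not part of the original code):

        // Covariant read-only view: 'out T' forbids T in input positions, so no setter here.
        public interface IItemType<out T> where T : ItemBase
        {
            T Base { get; }
        }

        public class ItemBase { }

        public class ItemType<T> : IItemType<T> where T : ItemBase
        {
            public T Base { get; set; }
        }

        public class EquipmentBase : ItemBase { }
        public class EquipmentType : ItemType<EquipmentBase> { }

        class Program
        {
            static void Main()
            {
                // Covariance lets an EquipmentType be used wherever a producer of ItemBase is expected.
                IItemType<ItemBase> item = new EquipmentType();
                ItemBase b = item.Base;

                // Casting back recovers the strongly typed view.
                EquipmentType equipment = (EquipmentType)item;
                EquipmentBase eb = equipment.Base;
                System.Console.WriteLine(eb == null ? "Base is not set" : eb.ToString());
            }
        }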

    Read the article

  • Extend base type and automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables):

        CreateDate
        CreateUser
        UpdateDate
        UpdateUser

    Currently we are programmatically updating the audit information. Ex:

        if(changed){
            entity.UpdatedOn = DateTime.Now;
            entity.UpdatedBy = Environment.UserName;
            context.SaveChanges();
        }

    But I am looking for a more automated solution. During save changes, if an entity is created/updated I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how I could do this? I would prefer not to do any reflection, so using a text template is not out of the question. A solution has been proposed to override SaveChanges and do it there, but in order to achieve this I would either have to use reflection (which I don't want to do) or derive from a base class. Assuming I go down this route, how would I achieve this? For example:

        EXAMPLE_DB_TABLE
            CODE
            NAME
            --Audit columns
            CREATE_DATE
            CREATE_USER
            UPDATE_DATE
            UPDATE_USER

    And if I create a base class:

        public abstract class IUpdatable{
            public virtual DateTime CreateDate {set;}
            public virtual string CreateUser { set;}
            public virtual DateTime UpdateDate { set;}
            public virtual string UpdateUser { set;}
        }

    The end goal is to be able to do something like...

        public override void SaveChanges(){
            //Go through state manager and update audit information
            //FOREACH changed entity in state manager
            if(entity is IUpdatable){
                //If state is created... update create audit.
                //If state is updated... update update audit.
            }
        }

    But I am not sure how I go about generating the code that would extend the interface.
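
    A sketch of the SaveChanges override, assuming an EF4 ObjectContext-based model: MyEntities is a hypothetical name for the generated context, each generated entity gets a hand-written partial class declaring that it implements the audit interface (the generated audit properties then satisfy it, assuming the property names match), and the interface uses get/set properties so the override can write them:

        using System;
        using System.Data;
        using System.Data.Objects;

        public interface IAuditable
        {
            DateTime CreateDate { get; set; }
            string CreateUser { get; set; }
            DateTime UpdateDate { get; set; }
            string UpdateUser { get; set; }
        }

        public partial class MyEntities : ObjectContext   // partial of the hypothetical generated context
        {
            public override int SaveChanges(SaveOptions options)
            {
                foreach (ObjectStateEntry entry in ObjectStateManager
                             .GetObjectStateEntries(EntityState.Added | EntityState.Modified))
                {
                    if (entry.IsRelationship) continue;

                    var auditable = entry.Entity as IAuditable;
                    if (auditable == null) continue;

                    if (entry.State == EntityState.Added)
                    {
                        auditable.CreateDate = DateTime.Now;
                        auditable.CreateUser = Environment.UserName;
                    }
                    auditable.UpdateDate = DateTime.Now;
                    auditable.UpdateUser = Environment.UserName;
                }
                return base.SaveChanges(options);
            }
        }

    A T4 text template over the model can emit the one-line partial classes (e.g. partial class ExampleDbTable : IAuditable { }), which keeps reflection out of the runtime path, as preferred above.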

    Read the article

  • XmlReader in Silverlight: <test /> self-closing tag problem

    - by Ummar
    Hi, I am parsing XML in Silverlight. In my XML I have tags like:

        <test attribute1="123" />
        <test1 attribute2="345">abc text</test1>

    I am using XmlReader to parse the XML, like this:

        using (XmlReader reader = XmlReader.Create(new StringReader(xmlString)))
        {
            // Parse the file and display each of the nodes.
            while (reader.Read())
            {
                switch (reader.NodeType)
                {
                    case XmlNodeType.Element:
                        //process start tag here
                        break;
                    case XmlNodeType.Text:
                        //process text here
                        break;
                    case XmlNodeType.XmlDeclaration:
                    case XmlNodeType.ProcessingInstruction:
                        break;
                    case XmlNodeType.Comment:
                        break;
                    case XmlNodeType.EndElement:
                        //process end tag here
                        break;
                }
            }
        }

    The problem is that for the test tag no EndElement is received, which is making my whole program logic wrong (for the test1 tag everything works fine). Please help me out.
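
    A self-closing element such as <test /> is reported as a single Element node with IsEmptyElement set to true, and no EndElement ever follows it. A small sketch of how the Element case can account for that (the sample XML is wrapped in a hypothetical root element so it parses as a document):

        using System.IO;
        using System.Xml;

        class SelfClosingDemo
        {
            static void Main()
            {
                string xmlString =
                    "<root><test attribute1=\"123\" /><test1 attribute2=\"345\">abc text</test1></root>";

                using (XmlReader reader = XmlReader.Create(new StringReader(xmlString)))
                {
                    while (reader.Read())
                    {
                        switch (reader.NodeType)
                        {
                            case XmlNodeType.Element:
                                // Read IsEmptyElement while still positioned on the start tag.
                                bool selfClosing = reader.IsEmptyElement;
                                System.Console.WriteLine("start: " + reader.Name);
                                if (selfClosing)
                                {
                                    // <test ... /> never raises EndElement, so treat it as ended here.
                                    System.Console.WriteLine("end (self-closing): " + reader.Name);
                                }
                                break;
                            case XmlNodeType.Text:
                                System.Console.WriteLine("text: " + reader.Value);
                                break;
                            case XmlNodeType.EndElement:
                                System.Console.WriteLine("end: " + reader.Name);
                                break;
                        }
                    }
                }
            }
        }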

    Read the article

  • Test disk recovery

    - by AIB
    I had a 250GB hard disk with several NTFS partitions. The disk was a dynamic disk (created in Windows). Now that I have formatted and reinstalled Windows (which was on another disk), the dynamic disk is shown as offline. I tried using the TestDisk tool to recover the data and created a partial backup. TestDisk is able to list all partitions on the disk. All partitions are shown as type 'D' (Deleted). I want to change the 'D' to 'P' (Primary), 'L' (Logical), or 'E' (Extended) as appropriate and build a new partition table. If I can write the partition table to disk, the disk will be of 'basic' type and should be readable in any OS. What should the appropriate partition types be? I checked the files on the partitions and no OS was found, so none of the partitions were bootable. Will randomly selecting P, L, or E hurt the data in any way?

    Read the article

  • Getting Data From Webpages?

    - by fuzzygoat
    When looking to get data from a web page, what's the recommended method if the page does not provide a structured data feed? Am I right in thinking that it's just a case of doing an NSURLRequest and then hacking what you need out of the responseData (NSData*)? I am not too concerned about the implementation in Xcode; I am more curious about how to actually collect the data, before I start coding a "hunt & peck" through a list of data. gary

    Read the article

  • Merging sequences by type with LINQ

    - by jankor
    I want to use LINQ to convert this:

        IEnumerable<int>[] value1ByType = new IEnumerable<int>[3];
        value1ByType[0] = new [] { 0 };
        value1ByType[1] = new [] { 10, 11 };
        value1ByType[2] = new [] { 20 };

        var value2ToType = new Dictionary<int,int> { {100,0}, {101,1}, {102,2}, {103,1} };

    to this:

        var value2ToValue1 = new Dictionary<int,int> { {100,0}, {101,10}, {102,20}, {103,11} };

    Is there a way to do this with LINQ? Without LINQ I would use multiple IEnumerators, one for each IEnumerable in value1ByType, like this:

        // create enumerators
        var value1TypeEnumerators = new List<IEnumerator<int>>();
        for (int i = 0; i < value1ByType.Length; i++)
        {
            value1TypeEnumerators.Add(value1ByType[i].GetEnumerator());
            value1TypeEnumerators[i].MoveNext();
        }

        // create wanted dictionary
        var value2ToValue1 = new Dictionary<int, int>();
        foreach (var item in value2ToType)
        {
            int value1 = value1TypeEnumerators[item.Value].Current;
            value2ToValue1.Add(item.Key, value1);
            value1TypeEnumerators[item.Value].MoveNext();
        }

    Any idea how to do this in LINQ?
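
    A sketch of one LINQ equivalent: group the value2 keys by their type index, then Zip each group with the matching sequence in value1ByType (Enumerable.Zip is new in .NET 4.0). Like the loop above, this relies on the dictionary enumerating its entries in insertion order:

        using System.Linq;

        var value2ToValue1 = value2ToType
            .GroupBy(kv => kv.Value)                           // one group per type index
            .SelectMany(g => g.Zip(value1ByType[g.Key],        // pair each key with the next value of that type
                                   (kv, v1) => new { kv.Key, Value1 = v1 }))
            .ToDictionary(x => x.Key, x => x.Value1);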

    Read the article

  • XmlSchemaElement type

    - by user305287
    Hi, I'm generating the WSDL of my web service dynamically, but when I set the XmlSchemaType of my elements, the ServiceDescriptor writes the element as an element rather than as a message, which is what I need. The code looks like this:

        XmlSchemaElement schemaElement = new XmlSchemaElement();
        XmlSchemaComplexType complexType = new XmlSchemaComplexType();
        XmlSchemaSequence sequence = new XmlSchemaSequence();

        foreach (XmlSchemaComplexElementParameter param in element.Parameters)
        {
            XmlSchemaElement paramElement = new XmlSchemaElement();
            paramElement.Name = param.ParameterName;
            paramElement.SchemaTypeName = new XmlQualifiedName(param.ParameterType, "http://www.w3.org/2001/XMLSchema");
            paramElement.MaxOccurs = param.MaxOccurs;
            paramElement.MinOccurs = param.MinOccurs;
            sequence.Items.Add(paramElement);
        }

        complexType.Particle = sequence;
        schemaElement.Name = element.MessageName;
        schemaElement.SchemaType = complexType;

    Any ideas about this?

    Read the article

  • Save data typed into PDF Form

    - by Manzoor Ahmed
    Hey, I have a PDF form which will not let me save the data typed into it. Here is the form: http://www.cic.gc.ca/english/pdf/kits/forms/imm0008egen.pdf I want to save the data typed into it so that I can email it to my relative. Any ideas? I'm using Acrobat Reader.

    Read the article

  • Linq to DataTable without enumerating fields

    - by Luciano
    Hi, I'm trying to query a DataTable object without specifying the fields, like this:

        var linqdata = from ItemA in ItemData.AsEnumerable()
                       select ItemA;

    but the return type is System.Data.EnumerableRowCollection<System.Data.DataRow>, and I need something like System.Data.EnumerableRowCollection<<object,object>> (like the standard anonymous type). Any idea? Thanks
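
    An anonymous type cannot be built without naming the fields, but each row can be projected into a dictionary of column name to value without listing the columns by hand. A sketch (assumes a reference to System.Data.DataSetExtensions for AsEnumerable):

        using System.Data;
        using System.Linq;

        var linqdata =
            from row in ItemData.AsEnumerable()
            select ItemData.Columns
                .Cast<DataColumn>()
                .ToDictionary(column => column.ColumnName,   // field name
                              column => row[column]);        // field value, typed as object

        // linqdata is an IEnumerable<Dictionary<string, object>>:
        // every field of every row, with no column named in the query itself.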

    Read the article

  • PHP scripts owned by www-data

    - by matnagel
    I am always running php scripts on a dedicated server as user "webroot". It would be easier for coding and administration if the scripts were owned by www-data, the apache2 user. Also feels more simple and clean. There is no ftp on this box and there are no other users or sites. Why not have the php scripts owned by www-data? If there is anything against it, what is the worst that can happen?

    Read the article

  • Covariance and Contravariance type inference in C# 4.0

    - by devoured elysium
    When we define our interfaces in C# 4.0, we are allowed to mark each of the generic parameters as in or out. If we try to set a generic parameter as out and that would lead to a problem, the compiler raises an error, not allowing us to do that. Question: if the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and when we try to use them in our client code, raise an error if we try to use them in an unsafe way? Example:

        interface MyInterface<out T> {
            T abracadabra();
        } //works OK

        interface MyInterface2<in T> {
            T abracadabra();
        } //compiler raises an error.
          //This makes me think that the compiler is capable
          //of understanding what situations might generate
          //run-time problems and then prohibits them.

    Also, isn't that what Java does in the same situation? From what I recall, you just do something like:

        IMyInterface<? extends whatever> myInterface;   //covariance
        IMyInterface<? super whatever> myInterface2;    //contravariance

    Or am I mixing things up? Thanks
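
    For illustration only, a sketch of why the annotation matters to callers and is therefore opt-in: variance is part of the interface's public contract, and an unannotated parameter stays invariant even when the shape of the interface would allow out.

        interface IProducer<out T> { T Produce(); }    // T appears only as output, so 'out' is legal
        interface IPlainProducer<T> { T Produce(); }   // same shape, but invariant: no annotation, no variance

        class StringProducer : IProducer<string>, IPlainProducer<string>
        {
            public string Produce() { return "hello"; }
        }

        class Program
        {
            static void Main()
            {
                IProducer<object> covariant = new StringProducer();         // compiles thanks to 'out T'
                // IPlainProducer<object> invariant = new StringProducer(); // does not compile: T is invariant
                System.Console.WriteLine(covariant.Produce());
            }
        }

    Keeping the annotation explicit also means that later adding a member that consumes T to an out interface is flagged at the interface definition, instead of silently breaking every assignment that relied on inferred variance.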

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical possition when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignement. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?

    Read the article

  • Migrate data from SQL Compact to SQL Server 2008

    - by Martin
    I need to do a one-time migration of data from SQL Server Compact Edition to SQL Server 2008 Express Edition. I'm looking for a tool to do this kind of migration. I've tried using Import and Export Data in SQL Server, but it doesn't let me import from SQL Server Compact Edition. Does anyone know of an easy way to do it?

    Read the article

  • Data aggregation: MongoDB vs MySQL

    - by Dimitris Stefanidis
    I am currently researching a backend to use for a project with demanding data aggregation requirements. The main project requirements are the following. Store millions of records for each user; users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year. Data aggregation on those entries must be performed on the fly: the users need to be able to filter the entries by a ton of available filters and then see summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the filter combinations (and thus the result sets) are huge. Users are going to have access to their own data only, but it would be nice if anonymous stats could be calculated across all the data. The data is going to arrive mostly in batches, e.g. the user will upload the data every day and it could be around 3000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items, for example. I made a simple test of creating a table with 1 million rows and performing a simple sum of one column both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec. I have also made the test with CouchDB and had much worse results. What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible? As it seems from my test (maybe I have done something wrong), with the current performance it's impossible to use MongoDB for such a project, although the automated sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in MongoDB, or have any insights that might be of help for the implementation of the project? Thanks, Dimitris

    Read the article

  • Java: matching two different types of arrays

    - by sling
    Hi, I am doing a password login that requires me to match two arrays: User and Pass. If the user keys in "mark" and "pass", it should show "Successful". However I am having trouble with the line String[] input = pass.getPassword(); and with the matching of the two arrays.

        String[] User = {"mark", "susan", "bobo"};
        String[] Pass = {"pass", "word", "password"};
        String[] input = pass.getPassword();

        if(Pass.length == input.length && user.getText().equals(User))
        {
            lblstat.setForeground(Color.GREEN);
            lblstat.setText("Successful");
        }
        else
        {
            lblstat.setForeground(Color.RED);
            lblstat.setText("Failed");
        }

    Read the article

  • How to group data changes by operation with MySQL triggers

    - by Jan-Henk
    I am using triggers in MySQL to log changes to the data. These changes are recorded on a row level. I can now insert an entry in my log table for each row that is changed. However, I also need to record the operation to which the changes belong. For example, a delete operation like "DELETE FROM table WHERE type=x" can delete multiple rows. With the trigger I can insert an entry for each deleted row into the log table, but I would like to also provide a unique identifier for the operation as a whole, so that the log table looks something like:

        log_id  operation_id  tablename  fieldname  oldvalue  newvalue
        1       1             table      id         1         null
        2       1             table      type       a         null
        3       1             table      id         2         null
        4       1             table      type       a         null
        5       2             table      id         3         null
        6       2             table      type       b         null
        7       2             table      id         4         null
        8       2             table      type       b         null

    Is there a way in MySQL to identify the higher-level operation to which the row changes belong? Or is this only possible by means of application-level code? In the future it would also be nice to be able to record the transaction to which an operation belongs. Another question is whether it is possible to capture the actual SQL query, besides using the query log. I don't think so myself, but maybe I am missing something. It is of course possible to capture these at the application level, but the goal is to keep intrusions into the application-level code as minimal as possible. When this is not possible with MySQL, how is this handled in other database systems? For the current project it is not an option to use something other than MySQL, but it would be nice to know for future projects.
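
    One approach that keeps the application-level intrusion small (a sketch, with hypothetical table, column, and connection names): set a user-defined session variable before each logical operation and let the audit trigger copy @operation_id into the log row. User variables are scoped to the connection, so every row-level trigger firing for that statement sees the same value; UUID() gives a string id here, so the operation_id column would be a string rather than the integer shown above, or the application could generate its own integer instead. From C# with Connector/NET it could look like this:

        using MySql.Data.MySqlClient;   // Connector/NET

        // The audit trigger is assumed to write @operation_id into the log row, e.g.
        //   ... VALUES (..., COALESCE(@operation_id, UUID()), ...)
        // so all rows changed by one statement share one operation id.
        string connectionString = "...";   // hypothetical
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();

            // Tag everything that follows on this connection with a fresh operation id.
            using (var tag = new MySqlCommand("SET @operation_id = UUID();", conn))
                tag.ExecuteNonQuery();

            using (var delete = new MySqlCommand("DELETE FROM mytable WHERE type = @type;", conn))
            {
                delete.Parameters.AddWithValue("@type", "x");
                delete.ExecuteNonQuery();   // every deleted row is logged with the same operation id
            }
        }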

    Read the article

  • Castle Windsor - Resolving a generic implementation to a base type

    - by arootbeer
    I'm trying to use Windsor as a factory to provide specification implementations based on subtypes of XAbstractBase (an abstract message base class in my case). I have code like the following:

        public abstract class XAbstractBase { }
        public class YImplementation : XAbstractBase { }
        public class ZImplementation : XAbstractBase { }

        public interface ISpecification<T> where T : XAbstractBase
        {
            bool PredicateLogic();
        }

        public class DefaultSpecificationImplementation : ISpecification<XAbstractBase>
        {
            public bool PredicateLogic() { return true; }
        }

        public class SpecificSpecificationImplementation : ISpecification<YImplementation>
        {
            public bool PredicateLogic() { /*do real work*/ }
        }

    My component registration code looks like this:

        container.Register(
            AllTypes.FromAssembly(Assembly.GetExecutingAssembly())
                .BasedOn(typeof(ISpecification<>))
                .WithService.FirstInterface()
        );

    This works fine when I try to resolve ISpecification<YImplementation>; it correctly resolves SpecificSpecificationImplementation. However, when I try to resolve ISpecification<ZImplementation>, Windsor throws an exception: "No component for supporting the service ISpecification`1[ZImplementation, AssemblyInfo...] was found". Does Windsor support resolving generic implementations down to base classes if no more specific implementation is registered?
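
    Out of the box Windsor matches a generic service on the exact type argument, so it will not quietly fall back to the ISpecification<XAbstractBase> registration. One workaround is a sketch like the following, assuming C# 4.0 and that ISpecification<T> can be marked contravariant (T only appears in the constraint), with a hypothetical factory class that checks for a specific registration and otherwise falls back to the default, which contravariance makes assignable:

        using Castle.Windsor;

        public interface ISpecification<in T> where T : XAbstractBase
        {
            bool PredicateLogic();
        }

        public class SpecificationFactory
        {
            private readonly IWindsorContainer _container;

            public SpecificationFactory(IWindsorContainer container)
            {
                _container = container;
            }

            public ISpecification<T> For<T>() where T : XAbstractBase
            {
                // Use the closed registration when one exists (e.g. YImplementation)...
                if (_container.Kernel.HasComponent(typeof(ISpecification<T>)))
                    return _container.Resolve<ISpecification<T>>();

                // ...otherwise fall back to the default; with 'in T' an
                // ISpecification<XAbstractBase> is usable as an ISpecification<T>.
                return _container.Resolve<ISpecification<XAbstractBase>>();
            }
        }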

    Read the article
