Search Results

Search found 89075 results on 3563 pages for 'data files'.


  • Analyze VS2010 C# projects and report files on disk not part of the projects?

    - by Lasse V. Karlsen
    I discovered earlier tonight that files and folders I have removed from my C# projects are apparently still on disk, even though my Visual Studio Mercurial plugin seems to do a good job of deleting them when I delete them in Visual Studio. It must have hiccupped when it came to these files. So I wondered... Does anyone have a script or similar, or know of something, that will look at my .csproj files and report extra files and folders on my disk that aren't part of the project files? I just want to clean up my repository contents.
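
    A small console utility can approximate this by comparing the <Compile>/<Content>/<None> items in each .csproj against the files on disk. The sketch below is a rough, hedged example rather than a polished tool - the item types checked and the bin/obj exclusions are assumptions, so adjust them to your project layout:

        // Sketch: list files on disk that a .csproj does not reference.
        // Assumes the usual MSBuild item types and skips bin/obj output folders.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Xml.Linq;

        class OrphanFinder
        {
            static void Main(string[] args)
            {
                string projectPath = Path.GetFullPath(args[0]);   // path to the .csproj
                string projectDir = Path.GetDirectoryName(projectPath);
                XNamespace ns = "http://schemas.microsoft.com/developer/msbuild/2003";
                string[] itemTypes = { "Compile", "Content", "None", "EmbeddedResource" };

                // Files the project knows about (Include paths are relative to the project file)
                var referenced = new HashSet<string>(
                    XDocument.Load(projectPath)
                        .Descendants()
                        .Where(e => itemTypes.Contains(e.Name.LocalName) && e.Name.Namespace == ns)
                        .Select(e => (string)e.Attribute("Include"))
                        .Where(p => !string.IsNullOrEmpty(p))
                        .Select(p => Path.GetFullPath(Path.Combine(projectDir, p))),
                    StringComparer.OrdinalIgnoreCase);

                // Files actually on disk, ignoring build output and the project file itself
                var onDisk = Directory.EnumerateFiles(projectDir, "*", SearchOption.AllDirectories)
                    .Where(f => f.IndexOf(@"\bin\", StringComparison.OrdinalIgnoreCase) < 0 &&
                                f.IndexOf(@"\obj\", StringComparison.OrdinalIgnoreCase) < 0 &&
                                !f.Equals(projectPath, StringComparison.OrdinalIgnoreCase));

                foreach (string file in onDisk.Where(f => !referenced.Contains(f)))
                    Console.WriteLine("Not in project: " + file);
            }
        }

    Wildcard includes and MSBuild variables are not handled here; a real tool would need to expand those before comparing.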

    Read the article

  • Possible ways of representing data in memory (.net)

    - by This is it
    Hi. What are the possible ways of representing data in memory in .NET (or in general)? It would be great if the data could be sorted and looked up by key (or by multiple keys). We are thinking of using collections, arrays, or lists of collections/arrays. One object would be in several collections (one sorted ascending, another descending, etc.). Maybe this is not a good idea, and we would like to hear some other possible solutions. Thank you.
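
    If the data fits in memory, one common pattern is to keep the objects in one list and build one or more dictionary indexes over the same instances, so the "several collections" idea does not require copying the objects. A rough sketch, with an invented Person type purely for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustration only: the Person type and its keys are invented for the example.
        class Person
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        class PersonStore
        {
            private readonly List<Person> all = new List<Person>();
            private readonly Dictionary<int, Person> byId = new Dictionary<int, Person>();
            private readonly SortedDictionary<string, List<Person>> byName =
                new SortedDictionary<string, List<Person>>(StringComparer.OrdinalIgnoreCase);

            public void Add(Person p)
            {
                all.Add(p);                              // master list, insertion order
                byId[p.Id] = p;                          // O(1) lookup by primary key
                List<Person> bucket;
                if (!byName.TryGetValue(p.Name, out bucket))
                    byName[p.Name] = bucket = new List<Person>();
                bucket.Add(p);                           // kept sorted by name for ordered iteration
            }

            public Person FindById(int id)
            {
                Person p;
                return byId.TryGetValue(id, out p) ? p : null;
            }

            // Ascending by name; reverse the enumeration for descending order
            public IEnumerable<Person> OrderedByName()
            {
                return byName.SelectMany(kvp => kvp.Value);
            }
        }

    The cost is that every index has to be updated on insert and delete, which is exactly the trade-off the question is weighing.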

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files.

    How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let’s see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The results of this are, firstly, that a new database is created with the specified file sizes and, secondly, that the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB.

    Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic.
    - If the file growth is lower than 64MB then 4 VLFs are created
    - If the growth is between 64MB and 1GB then 8 VLFs are created
    - If the growth is greater than 1GB then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let’s add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data so, a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table
        GO
        CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are:

    1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need.

    The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created: By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away.
    Hassle-free monitoring of VLFs: If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
    MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • How to find hidden/cloaked files in Windows 2003?

    - by homemdelata
    Here is the point. I set Windows to display all hidden files and protected operating system files, but even after that my antivirus (Kaspersky) is still flagging a ".dll" file in "c:\windows\system32", saying it's a riskware 'Hidden.Object'. I have tried to find this file every time, but it's not there. So I asked one of the developers to create a service that checks the folder every 5 seconds and, if it finds the file, copies it to another place. If it copies it to another place with the same name and extension, I still can't find the file in the other folder, but Kaspersky now finds both. If I keep the same name but use a different extension, like ".temp123", I still can't find the file. Lastly, I created an empty text file and renamed it with the same name as the other one, and that file disappeared too. After all this research it's clear that every file with this same name on this specific server gets cloaked, no matter the file extension. I created a file with the same name on another server and nothing happens; the file is still there without a problem. How can I find this kind of file? How can I "uncloak" it? How can I know what this file is doing?

    Read the article

  • Transferring data from Salesforce using Apex Data Loader to Oracle

    - by Barret
    While attempting to transfer data from Salesforce to Oracle using the Apex Data Loader, I keep getting the following error:

        26937 [databaseAccountExtract] FATAL com.salesforce.dataloader.dao.database.DatabaseContext - Error getting value for SQL parameter: nkey__c. Please make sure that the value exists in the configuration file or is passed in. Database configuration: insertAccount.

    The database-conf.xml has the following beans:

        <bean id="insertAccount" class="com.salesforce.dataloader.dao.database.DatabaseConfig" singleton="true">
          <property name="sqlConfig" ref="insertAccountSql"/>
          <property name="dataSource" ref="dbDataSource"/>
        </bean>
        <bean id="insertAccountSql" class="com.salesforce.dataloader.dao.database.SqlConfig" singleton="true">
          <property name="sqlString">
            <value>
              INSERT INTO VANTROPO.SF_ACCOUNTCHANNEL (nkey__c) VALUES (@nkey__c@)
            </value>
          </property>
          <property name="sqlParams">
            <map>
              <entry key="nkey__c" value="java.lang.String"/>
            </map>
          </property>
        </bean>

    The SDL (mapping file) has the following values:

        # Account Insert Mapping values for query from Salesforce (left) and insert/update to Oracle (right)
        # SalesforceFieldName=OracleFieldName
        nkey__c=NKEY__C

    Any help appreciated.

    Read the article

  • Error using Dynamic Data Filtering: missing datasource

    - by sebastiaan
    I am trying to use the ASP.NET Dynamic Data Filtering project, but I'm running into a problem during the configuration. I'm following the instructions on the author's blog, and everything works as described. Then it tells me to change the data source using the designer view. I am told to select the "GridDataSource" in the "Configure data source" wizard, but this option is not there. I get all of the classes in my project, including the DataContext that was generated by LINQ. When I choose "Show only DataContext objects", the dropdown ("Choose your context object:") is completely empty. When I turn off the checkbox and choose my DataContext class, I get asked which table I want and all that. But, as the whole purpose of a Dynamic Data site is NOT to use one single table, that's not much help. So I've looked at the instructions again and copied the resulting data source from the example:

        <asp:DynamicLinqDataSource ID="GridDataSource" runat="server" EnableDelete="True" EnableUpdate="True"></asp:DynamicLinqDataSource>

    This is exactly what I had, without the "WhereParameters" nodes in there. Now, when I run the list page, I get an exception about a missing data source from the filtering component. Of course, when I remove the DynamicFilterRepeater, it works again. This is the meat of the exception:

        [InvalidOperationException: Missing DataSource]
        Catalyst.Web.DynamicData.DynamicFilterRepeater.GetTable() in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:74
        Catalyst.Web.DynamicData.DynamicFilterRepeater.GetFilters() in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:81
        Catalyst.Web.DynamicData.DynamicFilterRepeater.OnInit(EventArgs e) in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:106

    How do I make the DynamicFilterRepeater recognize my data source? I'm using VS2010 Pro on a Win7 machine.

    Read the article

  • RESTful WCF Data Service Authentication

    - by Adrian Grigore
    Hi, I'd like to implement a REST API for an existing ASP.NET MVC website. I've managed to set up WCF Data Services so that I can browse my data, but now the question is how to handle authentication. Right now the data service is secured via the site's built-in forms authentication, and that's OK when accessing the service from AJAX forms. However, it's not ideal for a RESTful API. What I would like as an alternative to forms authentication is for users to simply embed the user name and password into the URL of the web service, or pass them as request parameters. For example, if my web service is usually accessible as http://localhost:1234/api.svc, I'd like to be able to access it using the URL http://localhost:1234/api.svc/{login}/{password}. So, my questions are as follows: Is this a sane approach? If yes, how can I implement it? It seems trivial to redirect GET requests so that the login and password are attached as GET parameters, and I also know how to inspect the HTTP context and use those parameters to filter the results. But I am not sure if or how the same approach could be applied to POST, PUT and DELETE requests. Thanks, Adrian
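
    For reference, a hedged sketch of the request-parameter variant (credentials as query-string parameters rather than path segments, which leaves the OData routing untouched): override OnStartProcessingRequest in the data service and reject requests whose credentials don't check out. The class and context names below are placeholders, not part of any existing API apart from DataService itself, and because the override runs for every request it applies equally to GET, POST, PUT and DELETE.

        using System;
        using System.Data.Services;
        using System.Web;

        // Sketch only: ApiDataContext is a placeholder for the service's real context type,
        // and InitializeService is omitted here (it stays as generated).
        public class ApiDataService : DataService<ApiDataContext>
        {
            protected override void OnStartProcessingRequest(ProcessRequestArgs args)
            {
                base.OnStartProcessingRequest(args);

                // e.g. http://localhost:1234/api.svc/Orders?login=jane&password=secret
                var query = HttpUtility.ParseQueryString(args.RequestUri.Query);
                string login = query["login"];
                string password = query["password"];

                // Placeholder check - swap in Membership.ValidateUser or your own user store
                if (!System.Web.Security.Membership.ValidateUser(login, password))
                    throw new DataServiceException(401, "Invalid credentials.");
            }
        }

    Whether this is a sane approach is the separate question: credentials in the URL tend to end up in server logs and proxies, so HTTPS plus a header-based scheme is often preferred.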

    Read the article

  • asp.net Dynamic Data Site with own MetaData

    - by loviji
    Hello, I'm looking for information about configuring my own metadata in an ASP.NET Dynamic Data site. For example, I have a table in MS SQL Server with the structure shown below:

        CREATE TABLE [dbo].[someTable](
          [id] [int] NOT NULL,
          [pname] [nvarchar](20) NULL,
          [FullName] [nvarchar](50) NULL,
          [age] [int] NULL)

    and there are 2 MS SQL tables (which I've created), sysTables and sysColumns.

        sysTables:
        ID | sysTableName | TableName | TableDescription
        1  | someTable    | Persons   | All Data about Persons in system

        sysColumns:
        ID | TableName | sysColumnName      | ColumnName | ColumnDesc                   | ColumnType   | MUnit
        1  | someTable | sometable_pname    | Name       | Persona Name(ex. John)       | nvarchar(20) | null
        2  | someTable | sometable_Fullname | Full Name  | Persona Name(ex. John Black) | nvarchar(50) | null
        3  | someTable | sometable_age      | age        | Person age                   | int          | null

    I want the Details/Edit/Insert/List/ListDetails pages to use sysTables and sysColumns as metadata, because, for example, on the Details page "fullName" is not as readable as "Full Name". Any ideas? Is it possible? Thanks.

    Update: In the List page, to display data from sysTables (the metadata table), I've modified <h2 class="DDSubHeader"><%= tableName%></h2>.

        public string tableName;
        protected void Page_Init(object sender, EventArgs e)
        {
            table = DynamicDataRouteHandler.GetRequestMetaTable(Context);
            //added by me
            uqsikDataContext sd = new uqsikDataContext();
            tableName = sd.sysTables.Where(n => n.sysTableName == table.DisplayName).FirstOrDefault().TableName;
            //end
            GridView1.SetMetaTable(table, table.GetColumnValuesFromRoute(Context));
            GridDataSource.EntityTypeName = table.EntityType.AssemblyQualifiedName;
            if (table.EntityType != table.RootEntityType)
            {
                GridQueryExtender.Expressions.Add(new OfTypeExpression(table.EntityType));
            }
        }

    So, what about sysColumns? How can I get data from my sysColumns table?
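
    For the columns, the same idea used for tableName can be extended: pull the rows for the current table out of sysColumns once, then look up the friendly ColumnName wherever a header is rendered. A rough sketch (it assumes the LINQ designer generated a sysColumns entity with TableName, sysColumnName and ColumnName properties straight from the table definition above):

        // Sketch: map physical column names to the friendly names stored in sysColumns.
        private Dictionary<string, string> LoadColumnNames(string displayName)
        {
            using (uqsikDataContext sd = new uqsikDataContext())
            {
                return sd.sysColumns
                         .Where(c => c.TableName == displayName)
                         .ToDictionary(c => c.sysColumnName, c => c.ColumnName);
            }
        }

        // Usage in Page_Init, after table has been resolved:
        //   Dictionary<string, string> columnNames = LoadColumnNames(table.DisplayName);
        //   string friendly = columnNames.ContainsKey("sometable_Fullname")
        //                         ? columnNames["sometable_Fullname"]
        //                         : "FullName";   // fall back to the raw column name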

    Read the article

  • How many files in a directory is too many?

    - by Kip
    Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server.) Background: I have a photo album website, and every image uploaded is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata are stored in a database. Right now, I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through an FTP or SSH client) take a few seconds. But I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user. I've thought about reducing the number of images per directory by making 16 subdirectories: 0-9 and a-f. Then I'd move the images into the subdirectories based on what the first hex digit of the filename was. But I'm not sure that there's any reason to do so except for the occasional listing of the directory through FTP/SSH.
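
    For what it's worth, the bucketing scheme described above is only a few lines of code. A rough sketch (shown in C# purely for illustration - the same logic ports to whatever the upload script is written in) that derives the subdirectory from the first hex digit and moves the file:

        using System.IO;

        class ImageBucketer
        {
            // Move a file such as "a58f375c.jpg" into a subdirectory named after its first hex digit.
            static string BucketImage(string imagesRoot, string fileName)
            {
                string bucket = fileName.Substring(0, 1).ToLowerInvariant();   // "a" for a58f375c.jpg
                string targetDir = Path.Combine(imagesRoot, bucket);
                Directory.CreateDirectory(targetDir);                          // no-op if it already exists
                string target = Path.Combine(targetDir, fileName);
                File.Move(Path.Combine(imagesRoot, fileName), target);
                return target;
            }
        }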

    Read the article

  • Jasper report exports empty data in PDF format when there is more data

    - by stanley
    I have a report to be exported to Excel, PDF and Word using JasperReports. I use an XML file as the data source for the report, but when the data grows, JasperReports exports an empty file for the PDF format only; when I reduce the data content it exports the available data correctly. Is there any limitation on PDF size? How can we manage the size in JasperReports from Java? My jrxml is really big, so I cannot add it here; I have added the Java code I use to export the content:

        JRAbstractExporter exporter = null;
        if (format.equals("pdf")) {
            exporter = new JRPdfExporter();
            jasperPrint.setPageWidth(Integer.parseInt( pWidth ));
        } else if (format.equals("xls")) {
            exporter = new JRXlsExporter();
        } else if (format.equals("doc")) {
            jasperPrint.setPageWidth(Integer.parseInt( pWidth ));
        }
        exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
        exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, outputStream_);
        exporter.exportReport();
        contents = outputStream_.toByteArray();
        response.setContentType("application/"+format);
        response.addHeader("Content-disposition", "attachment;filename=" + name.toString() + "."+format);

    Read the article

  • Data loss when downloading data from LDAP server

    - by Ricky D'Amelio
    Hi there. This question comes from a previous one I asked about handling NSData objects: http://stackoverflow.com/questions/2453785/converting-nsdata-to-an-nsstring-representation-is-failing. I have reached the point where I am taking an NSImage, turning it into NSData and uploading those data bytes to the LDAP server. I am doing this like so:

        //connected successfully to LDAP server above...
        struct berval photo_berval;
        struct berval *jpegPhoto_values[2];
        photo_berval.bv_len = [photo length];
        photo_berval.bv_val = [photo bytes];
        jpegPhoto_values[0] = &photo_berval;
        jpegPhoto_values[1] = NULL;
        mod.mod_type = "jpegPhoto";
        mod.mod_op = LDAP_MOD_REPLACE|LDAP_MOD_BVALUES;
        mod.mod_bvalues = jpegPhoto_values;
        mods[0] = &mod;
        mods[1] = NULL;
        //perform the modify operation
        rc = ldap_modify_ext_s(ld, givenModifyEntry, mods, NULL, NULL);

    That happens with no errors, and you can see a big blob of data when you're in the command line. My problem is, when I go to access the same data at a later stage, I am getting an image file back that's about 120 times smaller than the original image.

        //find the jpegPhoto attribute
        photoA = ldap_first_attribute(ld, photoE, &photoBer);
        while (strcasecmp(photoA, "jpegphoto") != 0) {
            photoA = ldap_next_attribute(ld, photoE, photoBer);
        }
        //get the value of the attribute
        if ((list_of_photos = ldap_get_values_len(ld, photoE, photoA)) != NULL) {
            //get the first JPEG
            photo_data = *list_of_photos[0];
            selectedPictureData = [NSData dataWithBytes:&photo_data length:sizeof(photo_data)];
            [selectedPictureData writeToFile:@"/Users/username/Desktop/Photo 2.jpg" atomically:YES];
            NSLog (@"%@", selectedPictureData);

    Has anyone successfully done this before or can anyone see what I might be doing wrong? I appreciate anyone's help. Sorry to post so many questions! Ricky.

    Read the article

  • Pros and cons of making database IDs consistent and "readable"

    - by gmale
    Question: Is it a good rule of thumb for database IDs to be "meaningless"? Conversely, are there significant benefits from having IDs structured in a way where they can be recognized at a glance? What are the pros and cons?

    Background: I just had a debate with my coworkers about the consistency of the IDs in our database. We have a data-driven application that leverages Spring, so we rarely ever have to change code. That means, if there's a problem, a data change is usually the solution. My argument was that by making IDs consistent and readable, we save ourselves significant time and headaches, long term. Once the IDs are set, they don't have to change often, and if done right, future changes won't be difficult. My coworkers' position was that IDs should never matter: encoding information into the ID violates DB design policies, and keeping them orderly requires extra work that "we don't have time for." I can't find anything online to support either position. So I'm turning to all the gurus here at SA!

    Example: Imagine this simplified list of database records representing food in a grocery store. The first set represents data that has meaning encoded in the IDs, while the second does not.

        IDs with meaning:

        Type
        1 Fruit
        2 Veggie

        Product
        101 Apple
        102 Banana
        103 Orange
        201 Lettuce
        202 Onion
        203 Carrot

        Location
        41 Aisle four top shelf
        42 Aisle four bottom shelf
        51 Aisle five top shelf
        52 Aisle five bottom shelf

        ProductLocation
        10141 Apple on aisle four top shelf
        10241 Banana on aisle four top shelf
        //just by reading the ids, it's easy to recognize that these are both Fruit on Aisle 4

        IDs without meaning:

        Type
        1 Fruit
        2 Veggie

        Product
        1 Apple
        2 Banana
        3 Orange
        4 Lettuce
        5 Onion
        6 Carrot

        Location
        1 Aisle four top shelf
        2 Aisle four bottom shelf
        3 Aisle five top shelf
        4 Aisle five bottom shelf

        ProductLocation
        1 Apple on aisle four top shelf
        2 Banana on aisle four top shelf
        //given the IDs, it's harder to see that these are both fruit on aisle 4

    Summary: What are the pros and cons of keeping IDs readable and consistent? Which approach do you generally prefer, and why? Is there an accepted industry best practice?

    Read the article

  • Project setup for an ADO.NET/WCF DataService

    - by Slauma
    I'd like to implement an ADO.NET/WCF DataService and I am wondering what's the best way to set up a project in VS2008 SP1 for this purpose. Currently I have an ASP.NET web application project (not the "WebSite" project type). The data access layer is an Entity model (EF version 1) with a SQL Server database. I have the Entity Model in a separate DLL project, and the web application project references this assembly for all data access. The ADO.NET/WCF DataService needs to communicate with the Entity model/database as well, and it has to be hosted on the same web server (IIS 7.5) together with the web application. Since the DataService is not directly related to that specific web application (though it will provide and modify data from/in the same database the web application uses), my basic idea was to separate the DataService into its own new project (which also references the Entity Model DLL). Now I have seen that there is no project type "ADO.NET/WCF DataService" in VS2008 SP1. It seems only possible to add a DataService as an item to other existing projects, for instance Web Application projects. Why isn't there a separate DataService project type? Does this mean I have to add the DataService as an item to my Web Application project? Or shall I create a new Web Application project and add a DataService to it? (I could delete the pregenerated default.aspx since I do not need any web pages in this project.) What's the best way? Thank you in advance for suggestions!
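
    For reference, the service class itself is tiny whichever project ends up hosting it; the project mostly exists to give the .svc endpoint a home. A minimal sketch (the entity container name is a placeholder, and the access rule is deliberately read-only), targeting the .NET 3.5 SP1 ADO.NET Data Services bits that ship with VS2008 SP1:

        using System.Data.Services;

        // Sketch only: MyEntities is the EF v1 ObjectContext from the separate model assembly.
        public class ProductDataService : DataService<MyEntities>
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                // Expose everything read-only to start with; tighten per entity set as needed.
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            }
        }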

    Read the article

  • Core Data 1-to-many relationship: List all related objects as section header in UITableView

    - by Snej
    Hi: I'm struggling with Core Data on the iPhone over the following: I have a 1-to-many relationship in Core Data. Assume the entities are called recipe and category; a category can have many recipes. I have managed to get all recipes listed in a UITableView with section headers named after the category. What I want to achieve is to list all categories as section headers, even those which have no recipe:

        category1   <--- this one should be displayed too
        category2
            recipe_x
            recipe_y
        category3
            recipe_z

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Recipe" inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];
        [fetchRequest setFetchBatchSize:10];
        NSSortDescriptor *sortDescriptor1 = [[NSSortDescriptor alloc] initWithKey:@"category.categoryName" ascending:YES];
        NSSortDescriptor *sortDescriptor2 = [[NSSortDescriptor alloc] initWithKey:@"recipeName" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor1,sortDescriptor2, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];
        NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:@"category.categoryName" cacheName:@"Recipes"];

    What is the most elegant way to achieve this with Core Data?

    Read the article

  • Empty data problem - data layer or DAL?

    - by luckyluke
    I'm designing the new app now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary-based values (currency, country, tax, whatever data) - dimensions. I cannot be assured, though, that there won't be nulls. So I am thinking:

    - create an empty value in each of the dictionaries with a special keyID, i.e. -1
    - have the ETL (SSIS) do the correct stuff and insert -1 where it needs to
    - let the DAL know that -1 is special (static const, whatever)
    - don't bother checking dictionary entries for nullness in the code, because THEY will always have a value

    But maybe I should be thinking:

    - import the data AS IS
    - let the DAL do the thinking, using the empty record pattern
    - still don't care in the code, because the business layer will have what it needs from the DAL

    I think this is more of an approach thing, but maybe I am missing something important here... What do you think? Am I clear? Please don't confuse it with the empty record problem. I do use the emptyCustomer thing all the time, and other defaults too.
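
    A sketch of what the first option can look like in code, just to make the trade-off concrete (the names are invented for illustration): each dimension exposes a well-known "unknown" member with the reserved key -1, and consumers compare against that member instead of checking for null.

        // Sketch: a reserved "unknown" dimension member, so lookups never return null.
        public class Country
        {
            public const int UnknownId = -1;
            public static readonly Country Unknown = new Country { Id = UnknownId, Name = "(unknown)" };

            public int Id { get; set; }
            public string Name { get; set; }

            public bool IsUnknown
            {
                get { return Id == UnknownId; }
            }
        }

        // In the DAL, anything the warehouse left null maps to the reserved member:
        //   order.Country = countryId.HasValue ? countries[countryId.Value] : Country.Unknown;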

    Read the article

  • Core Data - Entity Relationships Not Working as expected

    - by slimms
    I have set up my data model in Xcode like so:

        EntityA
            AttA1
            AttA2
        EntityB
            AttB1
            AttB2
            AttB3

    I then set up the relationships:

        EntityA
            Name: rlpToEntityB
            Destination: EntityB
            Inverse: rlpToEntityA
            To Many: Checked
        EntityB
            Name: rlpToEntityA
            Destination: EntityA
            Inverse: rlpToEntityB
            To Many: UnChecked

    i.e. a relationship between the two where each EntityA can have many EntityBs. It is my understanding that if I fetch a subset of EntityBs I can then retrieve the values for the related EntityAs. I have this working so that I can retrieve the EntityB values using

        NSManagedObject *objMO = [fetchedResultsController objectAtIndexPath:indexPath];
        strValueFromEntityB = [objMO valueForKey:@"AttB1"];

    However, if I try to retrieve a related value from EntityA by doing the following

        strValueFromEntityA = [objMO valueForKey:@"AttA1"];

    I get the error "The entity EntityB is not Key value coding-compliant for the key Atta1". Not surprisingly, I suppose, if I switch things around to fetch from EntityA I cannot access attributes of EntityB. So it appears the defined relationships are being ignored. Can anyone spot what I am doing wrong? I confess I'm very new to iPhone programming and especially to Core Data, so please go easy on me and provide verbose explanations or point me in the direction of a specific resource. I have downloaded the Apple sample apps (Core Data Books, Top Songs and Recipes) but I still can't work this out. Thanks in advance, Nev.

    Read the article

  • WCF Data Services - neither .Expand or .LoadProperty seems to do what I need

    - by TomK
    I am building a school management app where they track student tardiness and absences. I've got three entities to help me in this. A Students entity (first name, last name, ID, etc.); a SystemAbsenceTypes entity with SystemAbsenceTypeID values for Late, Absent-with-Reason, Absent-without-Reason; and a cross-reference table called StudentAbsences (matching the student IDs with the absence-type ID, plus a date, and a Notes field). What I want to do is query my entities for a given student, and then add up the number of each kind of Absence, for a given date range. I prepare my currentStudent object without a problem, then I do this...

        Me.Data.LoadProperty(currentStudent, "StudentAbsences") 'Loads the cross-ref data
        lblDaysLate.Text = (From ab In currentStudent.StudentAbsences Where ab.SystemAbsenceTypes.SystemAbsenceTypeID = Common.enuStudentAbsenceTypes.Late).Count.ToString

    ...and this second line fails, complaining it has no value for an object. I presume the problem is that while it DOES see that there are (let's say) four absences for the currentStudent (i.e., currentStudent.StudentAbsences.Count = 4) -- it can't yet "peer into" each one of the absences to look at its type. How do I use .Expand or .LoadProperty to make this happen? I tried fiddling with .LoadProperty but it doesn't take a two-level syntax like

        Data.LoadProperty(currentStudent, "StudentAbsences.SystemAbsenceTypeID")

    or the like. Is there some other technique?

    Read the article

  • NSUndoManager with Core Data - Redo not working

    - by CJ
    I have a Core Data document-based app which support undo/redo via the built-in NSUndoManager associated with the NSManagedObjectContext. I have a few actions set up which perform numerous tasks within Core Data, wrap all these tasks into an undo group via beginUndoGrouping/endUndoGrouping, and are processed by the NSUndoManager. Undo works fine. I can perform several successive actions, and each then undo each one of them successively and my app's state is maintained correctly. However, the "Redo" menu item is never enabled. This means that the NSUndoManager is telling the menu that there are no items to redo. I am wondering why the NSUndoManager is seemingly forgetting about items once they are undone, and not allowing redos to occur? One thing I should mention is that I'm disabling undo registration after a document is opened/created. When I perform an action, I call enableUndoRegistration, beginUndoGrouping, perform the action, then call processPendingChanges, setActionName:, endUndoGrouping, and finally disableUndoRegistration. This makes sure that only specific actions are undoable, and any other data changes I make outside of these go unnoticed to the NSUndoManager. This may be a part of the issue, but if so I'm wondering why it's affecting redo? Thanks in advance.

    Read the article

  • Regressing panel data in SAS.

    - by John
    Hey guys, thanks to your help I successfully managed all my databases! I am now looking at a panel data set on which I have to regress. Since I only started my PhD this semester, together with the econometrics courses, I am still new to many statistical applications and regression methods. I want to do a simple regression of Y on x1, x2, x3, etc. I have already browsed through some literature and found that for panel data it's common to do a fixed-effects regression. Also, my Y variable only has positive values, so I was thinking in the direction of a Tobit model. I'm doing some research concerning the coverage of analysts in the financial business. My independent variable is the coverage of analysts on a certain firm, so per observation I have 1 analyst and 1 firm, together with different characteristics (market cap, betas, etc.) of the firm. All this data is monthly. As coverage cannot become negative (only 0), I was thinking of a Tobit model. Do you guys have any ideas what would be a good regression method? Or some good sources of information (e-books, written books - through university I have access to almost anything concerning my field of work), because I do have to learn these things for future research? Thanks!

    Read the article

  • Creating LINQ to SQL Data Models' Data Contexts with ASP.NET MVC

    - by Maxim Z.
    I'm just getting started with ASP.NET MVC, mostly by reading ScottGu's tutorial. To create my database connections, I followed the steps he outlined, which were to create a LINQ-to-SQL dbml model, add in the database tables through the Server Explorer, and finally to create a DataContext class. That last part is the part I'm stuck on. In this class, I'm trying to create methods that work around the exposed data. Following the example in the tutorial, I created this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MySite.Models
        {
            public partial class MyDataContext
            {
                public List<Post> GetPosts()
                {
                    return Posts.ToList();
                }

                public Post GetPostById(int id)
                {
                    return Posts.Single(p => p.ID == id);
                }
            }
        }

    As you can see, I'm trying to use my Post data table. However, it doesn't recognize the "Posts" part of my code. What am I doing wrong? I have a feeling that my problem is related to my not adding the data tables correctly, but I'm not sure. Thanks in advance.
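
    One thing worth checking first (this is an assumption about the cause, since the .dbml isn't shown): a partial class only merges with the designer-generated DataContext when both the class name and the namespace match the designer's output exactly. If the designer generated, say, MySiteDataContext in the MySite.Models namespace, the extension class has to be declared the same way or the Posts table property will not be visible:

        using System.Collections.Generic;
        using System.Linq;

        namespace MySite.Models
        {
            // The class name must match the generated one in the .designer.cs file
            // (e.g. MySiteDataContext); otherwise the table properties such as Posts
            // belong to a different class and cannot be resolved here.
            public partial class MySiteDataContext
            {
                public List<Post> GetPosts()
                {
                    return Posts.ToList();
                }

                public Post GetPostById(int id)
                {
                    return Posts.Single(p => p.ID == id);
                }
            }
        }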

    Read the article

  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitudes/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision, and for whatever reason (I had them as floats in Core Data) it's rounding them and not giving me the exact values back. I tried the "decimal" type, with no luck either. Are NSStrings my only other option?

    EDIT - NSManagedObject:

        @interface Event : NSManagedObject
        {
        }
        @property (nonatomic, retain) NSDecimalNumber * dec;
        @property (nonatomic, retain) NSDate * timeStamp;
        @property (nonatomic, retain) NSNumber * flo;
        @property (nonatomic, retain) NSNumber * doub;

    Here's the code for a sample number that I store into Core Data:

        NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];

    Code to access it again:

        NSNumber *n = [managedObject valueForKey:@"dec"];
        NSNumber *f = [managedObject valueForKey:@"flo"];
        NSNumber *d = [managedObject valueForKey:@"doub"];

    Printed values:

        Printing description of n: -97.1234567890124
        Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
        Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}

    Read the article

  • Pass a data.frame column name to a function

    - by Kevin Middleton
    I'm trying to write a function to accept a data.frame (x) and a column from it. The function performs some calculations on x and later returns another data.frame. I'm stuck on the best-practices method to pass the column name to the function. The two minimal examples fun1 and fun2 below produce the desired result, being able to perform operations on x$column, using max() as an example. However, both rely on the seemingly (at least to me) inelegant (1) call to substitute() and possibly eval() and (2) the need to pass the column name as a character vector. fun1 <- function(x, column){ do.call("max", list(substitute(x[a], list(a = column)))) } fun2 <- function(x, column){ max(eval((substitute(x[a], list(a = column))))) } df <- data.frame(A = 1:20, B = rnorm(10)) fun1(df, "B") fun2(df, "B") I would like to be able to call the function as fun(df, B), for example. Other options I have considered but have not tried: Pass column as an integer of the column number. I think this would avoid substitute(). Ideally, the function could accept either. with(x, get(column)), but, even if it works, I think this would still require substitute Make use of formula() and match.call(), neither of which I have much experience with. Subquestion: Is do.call() preferred over eval()? Thanks, Kevin

    Read the article

  • LOAD DATA INFILE with variables

    - by Hasitha
    I was trying to use LOAD DATA INFILE in a stored procedure, but it seems that cannot be done. Then I tried the usual way of embedding the code in the application itself, like so:

        conn = new MySqlConnection(connStr);
        conn.Open();
        MySqlCommand cmd = new MySqlCommand();
        cmd = conn.CreateCommand();
        string tableName = getTableName(serverName);
        string query = "LOAD DATA INFILE '" + fileName + "'INTO TABLE "+ tableName +" FIELDS TERMINATED BY '"+colSep+"' ENCLOSED BY '"+colEncap+"' ESCAPED BY '"+colEncap+"'LINES TERMINATED BY '"+colNewLine+"' ("+colUpFormat+");";
        cmd.CommandText = query;
        cmd.ExecuteNonQuery();
        conn.Close();

    The generated query that gets saved in the string variable query is:

        LOAD DATA INFILE 'data_file.csv'INTO TABLE tbl_shadowserver_bot_geo FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '"'LINES TERMINATED BY '\n' (timestamp,ip,port,asn,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,url,agent,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy);

    But now when I run the program it gives an error saying:

        MySQLException(0x80004005) Parameter @dummy must be defined first.

    I don't know how to get around this, but when I use the same query on MySQL directly it works fine. PLEASE help me... thank you very much :)
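
    One workaround that is often suggested for this particular error (treat it as an assumption to verify against the Connector/NET version in use): tell the connector to pass @-prefixed tokens such as @dummy through to the server as user variables instead of treating them as undefined command parameters, via the "Allow User Variables" connection string option.

        // Sketch (assumes "using MySql.Data.MySqlClient;"): "Allow User Variables=True" makes
        // Connector/NET hand @dummy through to MySQL rather than demanding a command parameter
        // for it. Verify the option against your connector version before relying on it.
        string connStr = "server=localhost;database=mydb;uid=me;pwd=secret;Allow User Variables=True";

        using (MySqlConnection conn = new MySqlConnection(connStr))
        {
            conn.Open();
            using (MySqlCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = query;   // the LOAD DATA INFILE statement built above
                cmd.ExecuteNonQuery();
            }
        }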

    Read the article

  • Using Constraints on Hierarchical Data in a Self-Referential Table

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data:

        +--------+-------------+
        | Field  | Type        |
        +--------+-------------+
        | id     | int(10)     |
        | parent | int(10)     |
        | name   | varchar(45) |
        +--------+-------------+

    The table is self-referential in that parent refers to id. So you might have the following data:

        +----+--------+---------------+
        | id | parent | name          |
        +----+--------+---------------+
        |  1 |      0 | fruit         |
        |  2 |      0 | vegetable     |
        |  3 |      1 | apple         |
        |  4 |      1 | orange        |
        |  5 |      3 | red delicious |
        |  6 |      3 | granny smith  |
        |  7 |      3 | gala          |
        +----+--------+---------------+

    Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data so that updates cascade and deletion of fruit that have "children" is prevented. So I used the following:

        CREATE TABLE `idtlp_main`.`fruit` (
          `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
          `parent` INT(10) UNSIGNED,
          `name` VARCHAR(45) NOT NULL,
          PRIMARY KEY (`id`),
          CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`)
            ON UPDATE CASCADE ON DELETE RESTRICT
        ) ENGINE = InnoDB;

    From what I understand, this should fit my requirements. (And parent must default to null to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade:

        Cannot delete or update a parent row: a foreign key constraint fails (`iddoc_main`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE)

    What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.

    Read the article

  • iPhone application update (using Core Data on Sqlite)

    - by owen
    I have an app which is using Core Data on SQLite. Now I have an update with some DB structure changes, say adding a new table. I know that when an app gets updated, only the app binary is updated; nothing in the Documents directory is changed. When the updated app launches for the first time and runs [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]], it will find the difference between the data model and the DB structure in SQLite, throw an exception and quit, with the error "The model used to open the store is incompatible with the one used to create the store". So, can anyone here give me some idea how to update an app when there is a DB structure change? I think we can run a DB script to create that new table when the update launches for the first time. But if there are other changes, like changing the type of some fields or deleting some fields, and we need to migrate the old data, this is really a headache. In this case, is the only way out to create a new app? Has anyone tried something similar?

    Read the article
