Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.


  • How to Implement Complex Form Data?

    - by SoulBeaver
    I'm supposed to implement a relatively complex form that looks as follows but has at least four more pages requiring the user to fill in all the necessary information for the tracks. This data will need to be sent to the server, which is implemented using Dropwizard. I'm looking for best practices on how to upload and send such a complex form, with potentially dozens of songs, to the server. The simplest available solution I have seen is a plain multipart/form-data request with the following form schema (Source):

    Client:

        <html>
        <body>
            <h1>File Upload with Jersey</h1>
            <form action="rest/file/upload" method="post" enctype="multipart/form-data">
                <p>Select a file : <input type="file" name="file" size="45" /></p>
                <input type="submit" value="Upload It" />
            </form>
        </body>
        </html>

    Server:

        @POST
        @Path("/upload")
        @Consumes(MediaType.MULTIPART_FORM_DATA)
        public Response uploadTrack(final FormDataMultiPart multiPart) {
            List<FormDataBodyPart> artists = multiPart.getFields("artist");
            StringBuffer output = new StringBuffer();
            for (FormDataBodyPart artist : artists)
                output.append(artist.getValueAs(String.class));
            List<FormDataBodyPart> tracks = multiPart.getFields("track");
            for (FormDataBodyPart track : tracks)
                writeToFile(track.getValueAs(InputStream.class), "Foo");
            return Response.status(200).entity(output.toString()).build();
        }

    I have also read about file uploads via Ajax or FormData (Mozilla XMLHttpRequest), which allow POSTs in the formats application/x-www-form-urlencoded, multipart/form-data, or text/plain. I don't know which approach, if any, is best. An ideal solution would be to use Jackson to convert a JSON string into my data objects, but I don't get the impression that this is possible with binary data.
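    One possible direction, sketched below under assumptions (this is not from the original question or its answers): send the structured track metadata as a single JSON part that Jackson deserializes, and keep the audio files as separate binary parts that Jersey streams. The resource, class, and field names here are illustrative.

        // Sketch only: assumes a Jersey 2.x resource with the multipart feature registered
        // (Dropwizard needs its multipart support enabled) and Jackson on the classpath.
        import com.fasterxml.jackson.databind.ObjectMapper;
        import org.glassfish.jersey.media.multipart.FormDataBodyPart;
        import org.glassfish.jersey.media.multipart.FormDataParam;

        import javax.ws.rs.Consumes;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.core.Response;
        import java.io.InputStream;
        import java.util.List;

        @Path("/albums")
        public class AlbumResource {

            private final ObjectMapper mapper = new ObjectMapper();

            // Hypothetical metadata shape: one JSON document describing every track.
            public static class AlbumMetadata {
                public String artist;
                public List<TrackInfo> tracks;
            }

            public static class TrackInfo {
                public String title;
                public int trackNumber;
            }

            @POST
            @Path("/upload")
            @Consumes(MediaType.MULTIPART_FORM_DATA)
            public Response uploadAlbum(
                    @FormDataParam("metadata") String metadataJson,            // structured form data as JSON
                    @FormDataParam("track") List<FormDataBodyPart> trackParts) // one binary part per song
                    throws Exception {
                // Jackson handles the complex, nested form data...
                AlbumMetadata metadata = mapper.readValue(metadataJson, AlbumMetadata.class);
                // ...while the binary parts stay as streams and are persisted separately.
                for (FormDataBodyPart part : trackParts) {
                    InputStream audio = part.getValueAs(InputStream.class);
                    // write the stream to disk or object storage here
                }
                return Response.ok("Received " + metadata.tracks.size() + " tracks").build();
            }
        }

    On the browser side, the same request can be assembled with the FormData API by appending one "metadata" string field and one "track" entry per file.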

    Read the article

  • Data that has been deleted in P6, how is it updated in Analytics

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0, the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs, or those deletes would be lost. After the ETL process has run, the refrdel table can be cleared out. It is important to keep any purging of the refrdel table on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher, the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle all interaction with the refrdel table, so the concern about timing refrdel cleanup around ETL runs no longer applies as of this release. In the Extended Schema tables (e.g. TaskX) there can still be deleted data present; the Extended Schema views join on the primary PMDB tables (e.g. Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the clean-up procedure documented in the P6 Extended Schema white paper. This can be run occasionally, but it does not need to run often unless large amounts of data have been deleted.

    Read the article

  • Business Intelligence (BI) Defined

    CIO.com defines Business Intelligence (BI) as a generic reference to a collection of applications that are used to analyze raw organizational data. Typical BI activities include data mining, online analytical processing, querying, and reporting. They further explain that the primary reason a company would utilize BI is to make more of its data accessible. The more accessible data is to the users, the faster they can identify ways to reduce business costs, discover new business opportunities, and react quickly to adjust prices based on current supply and demand. One area in which a hospital system could use BI derived from a data warehouse is the Emergency Room (ER), in regard to the number of doctors and nurses working during a full moon at each ER location. To determine this, BI needs to identify a trend in the number of patients seen on a full moon; furthermore, it also needs to determine the optimal number of staff members working during a full moon by determining the employee-to-patient ratio needed to meet standard patient wait times while remaining the most cost effective for the hospital. This allows the hospital system to estimate the number of potential patients it will have on the next full moon and adjust staff schedules accordingly, ensuring that patient care is not affected in any way by the influx (or lack of influx) of patients during this time, while also working only the minimum number of employees needed so the hospital still makes a profit. Another area where a hospital system could use BI data concerns the orders placed with drug and medical supply companies. BI could identify trends in the prescriptions given to patients; this information could be used for ordering new supplies and forecasting the amount of medicine each hospital needs to keep on site at a given time. For example, a hospital might want to stock up on the materials needed to set bones in a cast before the summer, because its BI indicates that a majority of broken bones occur during the summer, when children are out of school and have more free time.

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast evolving market, speed is of the essence. mCentric and Globacom leveraged Big Data Appliance, Oracle NoSQL Database to save over 35,000 Call-Processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile:
    Why Oracle
    “Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom.
    Implementation Process
    It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance, perform the software install and configuration, certification, and resiliency testing. The entire process, from site planning to phase-I go-live, was executed in just over ten weeks, well ahead of the four months allocated to complete the project. Oracle partner mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations are tailored for peak performance, all patches are applied, and software and communications are consistently tested using proven methodologies and best practices. Read the entire profile here.

    Read the article

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: There is only a single PowerShell script on the servers; it encrypts data (a simple CSV file) and sends it using the preferred method (an HTTP/HTTPS POST could work). There is no further control over those servers. I can't install any service, process, etc.; I can only configure the script to execute every hour. The script will also have an encrypted username/password/license key for every server, and it will compress the data and send it to me along with that information. So I need a service (I'm not sure if a web service is the right solution) in the cloud that will:
    - Receive the data that is sent from the servers.
    - Authenticate the request, recognizing the sender by its license key/username/password, and most importantly,
    - Redirect/send this file to my SQL Server in the cloud (Azure). It should also separate the data according to the customer information in the license key, so that every customer's data is stored in dedicated DBs/tables on my SQL Server.
    All the processes above should be completed automatically; there is no room for manual steps. Question: Is a web service (SOAP or RESTful) the right solution for this?
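    A minimal sketch of what the REST shape could look like, written in Java/JAX-RS purely for illustration (the question itself is platform-agnostic, and the endpoint path, header name, and helper classes below are assumptions, not an existing API):

        // Sketch only: one HTTPS endpoint that authenticates by license key and hands the
        // compressed payload to a per-customer store. Persistence and decryption are stubbed.
        import javax.ws.rs.Consumes;
        import javax.ws.rs.HeaderParam;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.core.Response;
        import java.io.InputStream;

        @Path("/ingest")
        public class IngestResource {

            private final LicenseRegistry licenses = new LicenseRegistry();   // hypothetical helper
            private final CustomerStore store = new CustomerStore();          // hypothetical helper

            @POST
            @Consumes(MediaType.APPLICATION_OCTET_STREAM)
            public Response ingest(@HeaderParam("X-License-Key") String licenseKey,
                                   InputStream compressedCsv) {
                // 1. Authenticate: map the license key to a customer, reject unknown senders.
                String customerId = licenses.customerFor(licenseKey);
                if (customerId == null) {
                    return Response.status(Response.Status.UNAUTHORIZED).build();
                }
                // 2. Route: store the payload in that customer's dedicated tables.
                store.save(customerId, compressedCsv);
                return Response.accepted().build();
            }

            // Hypothetical helpers; real implementations would check a key store and write to SQL.
            static class LicenseRegistry {
                String customerFor(String licenseKey) {
                    return licenseKey == null ? null : "customer-" + licenseKey.hashCode();
                }
            }

            static class CustomerStore {
                void save(String customerId, InputStream payload) { /* write to per-customer tables */ }
            }
        }

    A SOAP service would work too, but a plain HTTPS POST like this is the easiest thing for a scheduled PowerShell script (Invoke-WebRequest / Invoke-RestMethod) to call every hour.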

    Read the article

  • Should I encrypt data in database?

    - by Tio
    I have a client for whom I'm going to build a web application for patient care: managing patients, consults, history, calendars, basically everything about that. The problem is that this is sensitive data, patient history and such. The client insists on encrypting the data at the database level, but I think this is going to hurt the performance of the web app (though maybe I shouldn't be worried about this). I've read the laws about data protection on health issues (Portugal), but they aren't very specific about this (I just questioned them about this and am waiting for their response). I've read the following link, but my question is different: should I encrypt the data in the database or not? One problem I foresee with encrypting the data is that I'm going to need a key. This could be the user password, but we all know what user passwords look like (12345, etc.), and if I generate a key I would have to store it somewhere, which means the programmer, DBA, or whoever could have access to it. Any thoughts on this? Even adding a random salt to the user password isn't going to solve the problem, since I can always access it and therefore decrypt the data.
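    One pattern worth weighing here (an assumption on my part, not something from the question or its answers) is envelope encryption: encrypt each sensitive value with a random data key, then encrypt that data key with a master key kept outside the database (for example in a key-management service or HSM), so that a copy of the database alone is not enough to decrypt anything. A minimal sketch in Java using only the standard JCE classes; the class and method names are illustrative:

        // Sketch only: AES-GCM for the field value, AES key wrapping for the per-record data key.
        // Where the master key lives (KMS, HSM, config vault) is the real design decision and is
        // deliberately left outside this snippet.
        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.GCMParameterSpec;
        import java.security.SecureRandom;

        public class EnvelopeCrypto {

            private static final SecureRandom RANDOM = new SecureRandom();

            public static final class EncryptedField {
                public final byte[] iv;
                public final byte[] ciphertext;
                public final byte[] wrappedDataKey;   // stored next to the ciphertext in the DB
                EncryptedField(byte[] iv, byte[] ciphertext, byte[] wrappedDataKey) {
                    this.iv = iv;
                    this.ciphertext = ciphertext;
                    this.wrappedDataKey = wrappedDataKey;
                }
            }

            public static EncryptedField encrypt(byte[] plaintext, SecretKey masterKey) throws Exception {
                // Fresh data key per record: leaking one key exposes only one record.
                KeyGenerator keyGen = KeyGenerator.getInstance("AES");
                keyGen.init(256);
                SecretKey dataKey = keyGen.generateKey();

                byte[] iv = new byte[12];
                RANDOM.nextBytes(iv);
                Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
                aes.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, iv));
                byte[] ciphertext = aes.doFinal(plaintext);

                // Wrap the data key with the master key held outside the database.
                Cipher wrap = Cipher.getInstance("AESWrap");
                wrap.init(Cipher.WRAP_MODE, masterKey);
                byte[] wrappedDataKey = wrap.wrap(dataKey);

                return new EncryptedField(iv, ciphertext, wrappedDataKey);
            }
        }

    Decryption reverses the steps (unwrap the data key, then AES-GCM decrypt with the stored IV); note that this only helps against a curious DBA or a stolen backup if the master key genuinely lives outside the database host.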

    Read the article

  • What is the best way to store a table in C++

    - by Topo
    I'm programming a decision tree in C++ using a slightly modified version of the C4.5 algorithm. Each node represents an attribute (a column of your data set) and it has a child per possible value of the attribute. My problem is how to store the training data set, keeping in mind that I have to use a subset for each node, so I need a quick way to select only a subset of rows and columns. The main goal is to do it in the most memory- and time-efficient way possible (in that order of priority). The best way I have thought of is to have an array of arrays (or std::vector), or something like that, and for each node keep a list (array, vector, etc.) of the column/row pairs (probably a tuple) that are valid for that node. I know there should be a better way to do this; any suggestions? UPDATE: What I need is something like this. In the beginning I have this data:

        Paris     4  5.0  True
        New York  7  1.3  True
        Tokio     2  9.1  False
        Paris     9  6.8  True
        Tokio     0  8.4  False

    But for the second node I just need this data:

        Paris     4  5.0
        New York  7  1.3
        Paris     9  6.8

    And for the third node:

        Tokio     2  9.1
        Tokio     0  8.4

    But with a table of millions of records and up to hundreds of columns. What I have in mind is to keep all the data in a matrix, and then for each node keep the info of the current columns and rows. Something like this:

        Paris     4  5.0  True
        New York  7  1.3  True
        Tokio     2  9.1  False
        Paris     9  6.8  True
        Tokio     0  8.4  False

        Node 2: columns = [0,1,2], rows = [0,1,3]
        Node 3: columns = [0,1,2], rows = [2,4]

    This way, in the worst-case scenario I just have to waste size_of(int) * (number_of_columns + number_of_rows) per node, which is a lot less than having an independent data matrix for each node.
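    A small sketch of that "one shared matrix plus per-node index lists" layout, written in Java only for brevity (the question is C++, where the same structure maps to a shared std::vector-based matrix plus std::vector<int> index lists per node); all names are illustrative:

        // Each node owns only two index arrays; the cell data is stored exactly once.
        public class DataView {
            private final Object[][] data;   // the full training set, shared by every node
            private final int[] rows;        // row indices visible to this node
            private final int[] cols;        // column indices visible to this node

            public DataView(Object[][] data, int[] rows, int[] cols) {
                this.data = data;
                this.rows = rows;
                this.cols = cols;
            }

            public int rowCount() { return rows.length; }
            public int colCount() { return cols.length; }

            // (i, j) in the view maps to (rows[i], cols[j]) in the shared matrix.
            public Object get(int i, int j) {
                return data[rows[i]][cols[j]];
            }

            // A child node narrows the view to the rows that satisfy its split,
            // without copying any cell data.
            public DataView withRows(int[] keptRows) {
                return new DataView(data, keptRows, cols);
            }
        }

    The per-node cost is exactly the two index arrays, which matches the size_of(int) * (number_of_columns + number_of_rows) estimate in the question.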

    Read the article

  • MS Access 2003: Can data disappear from records and how do I test for this and prevent it?

    - by user328960
    Problem, and about the database: Data from a record in an Access 2003 database has disappeared. This database has 1 backend and 3 frontends, multiple users, and is hosted on Citrix. Within this database, we have records of all clients served, ranging in the 1000s. Background info: The form for client data entry is set up with various subforms, including both a "programs enrolled" subform and a "services" subform. A client can be enrolled in multiple programs. Once enrolled in a program, services can be entered for that program area using the services subform. There are multiple fields in the services subform, one of which is a drop-down field allowing you to choose from the programs a client has been enrolled in (the list is updated for that client whenever he is enrolled in a new program). The problem details: For one specific record and one specific program area, the program has disappeared from the "programs enrolled" subform and all of the related services have disappeared from the "services" subform for a period of 3 months of data entry. However, other programs and services for this record did not disappear. Questions: Is the disappearance of data a common Access 2003 problem? Are there tests in place that can be run to see if data is disappearing and catch that data? If so, what are they? If there is specific code involved, what is it? What can be done to prevent the disappearing of data (other than using a different database)?

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional ... "audit trail table copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field) inserted into by triggers" ... setup for a database table audit trail, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

    Read the article

  • Steps to Investigate Cause of Web.Config Duplicate Section

    - by pauly
    Symptoms: In an IIS .NET 2.0 Integrated app pool, double-clicking to view any web.config section results in the following error dialog: "There was an error while performing this operation.... Filename... web.config... Error: There is a duplicate..." Browsing to the URL displays: "HTTP 500.19 internal server error... There is a duplicate 'system.web.extensions/scripting/scriptResourceHandler' section defined...." Running the app from VS 2008, an "Unable to start debugging on the web server..." dialog is displayed.
    Things tried:
    - Looked at other application directories on the same IIS server. No problem viewing web.config contents or serving up the apps.
    - Removed and re-added the application in IIS.
    - Checked out a new version of the source code.
    - Reverted to prior versions of the web.config file.
    - Looked for web.config files that might have duplicate sections in: the Inetpub root, "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config", and the "Views" subfolder of the ASP.Net MVC app.
    - Checked out the source code to another dev machine and set up the IIS 7 app folder there. No problem with web.config.
    Question: If the reason for this error is another web.config file, where else should I look? Are there other reasons for these symptoms?

    Read the article

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
    I have created a solution which reads a large CSV file, currently 20-30 MB in size. I have tried to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows, but it's so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a CSV file? Here's the code; definitely I am doing something wrong:

        DataTable dtCSV = ReadCsv(file, columns); // columns is a List<string>
        DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

        private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataView dv = dtCSV.DefaultView;
            string rowFilter = string.Empty;
            DataTable dt = dv.ToTable().Clone();

            foreach (DataRow row in dtCSV.Rows)
            {
                try
                {
                    // Build a filter that matches the current row on the user-chosen columns.
                    rowFilter = string.Empty;
                    foreach (string col in columns)
                    {
                        rowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                    }
                    rowFilter = rowFilter.Substring(0, rowFilter.Length - 4);
                    dv.RowFilter = rowFilter;

                    // Only copy the row if an equal row has not already been added.
                    if (!RowExists(dt, rowFilter))
                    {
                        DataRow dr = dt.NewRow();
                        dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                        dt.Rows.Add(dr);
                    }
                }
                catch (Exception ex)
                {
                    // Swallowing exceptions here hides errors; at minimum this should be logged.
                }
            }
            return dt;
        }
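    One faster alternative worth considering, sketched in Java below purely as an illustration (the question is C#, where the same idea maps directly onto a HashSet<string> over the DataTable rows): build a composite key from the user-chosen columns and keep only the first row seen for each key, turning the filter-per-row approach into a single pass.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class CsvDeduplicator {

            // rows: parsed CSV records; keyColumns: indices of the columns chosen at run time.
            public static List<String[]> removeDuplicates(List<String[]> rows, int[] keyColumns) {
                Set<String> seen = new HashSet<>();
                List<String[]> result = new ArrayList<>();
                for (String[] row : rows) {
                    StringBuilder key = new StringBuilder();
                    for (int col : keyColumns) {
                        key.append(row[col]).append('\u0001');   // separator unlikely to occur in data
                    }
                    if (seen.add(key.toString())) {              // add() returns false for duplicates
                        result.add(row);
                    }
                }
                return result;
            }
        }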

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and "normalize" them. For each value passed to the stored procedure, it checks whether the value is already in the table. If it is, it stores the id of that row in a variable; if the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10 ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance has to do with the way in which I'm doing the inserts, i.e.:

        INSERT INTO TableA (first_value) VALUES (argument_from_sp)
        ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you skip this step, though, the LAST_INSERT_ID() function returns the wrong value when you run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code: The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data gathering part of the query twice?

        SELECT
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT
             ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
               (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
             ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
             ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT
               AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM
               CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
             WHERE
               $X{ IN, C.ID, CityCode } AND
               SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
               S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
               Y.YEAR BETWEEN 1900 AND 2009 AND
               M.YEAR_REF_ID = Y.ID AND
               M.CATEGORY_ID = $P{CategoryCode} AND
               M.ID = D.MONTH_REF_ID AND
               D.DAILY_FLAG_ID <> 'M'
             GROUP BY Y.YEAR
           ) t
          ) ymxb
        WHERE
          $X{ IN, C.ID, CityCode } AND
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          Y.YEAR BETWEEN 1900 AND 2009 AND
          M.YEAR_REF_ID = Y.ID AND
          M.CATEGORY_ID = $P{CategoryCode} AND
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question: How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode } AND
        SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
        S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
        Y.YEAR BETWEEN 1900 AND 2009 AND
        M.YEAR_REF_ID = Y.ID AND
        M.CATEGORY_ID = $P{CategoryCode} AND
        M.ID = D.MONTH_REF_ID AND
        D.DAILY_FLAG_ID <> 'M'

    Related: http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql
    Thank you!

    Read the article

  • Duplicate Symbol Linker Error (C++ help)

    - by Vash265
    Hi. I'm learning some CSP (constraint satisfaction) theory right now, and am using this library to parse XML files. I'm using Xcode as an IDE. My program compiles fine, but when it goes to link the files, I get a duplicate symbol error involving the XMLParser_libxml2.hh file. My files are separated as such: a class header file that includes the XMLParser file above, a class implementation file that includes the class header file, and a main file that includes the class header file. The duplicate symbol is occurring in main.o and classfile.o, but as far as I can tell, I'm not actually adding that .hh file twice. Full error:

        ld: duplicate symbol
        bool CSPXMLParser::UTF8String::to<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const
        in /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/dStructFill.o
        and /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/main.o

    Copying the implementation of the class into the main file and taking the class implementation file out of the compilation target alleviates the error, but it's a disorganized mess this way, and I'll be adding more classes very soon (and it would be nice to have them in separate files). As I've come to understand it, this is caused by the file (XMLParser_libxml2.hh) having both the class and function definitions and implementations in one file (and it seems as though this might have been necessary due to the use of templates in that 'header' file). Any ideas on how to get around sticking all my class files in my main.cpp? (I've tried ifdefs; they don't work.)

    Read the article

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer plus dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be usable with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
          def mul(v: Double): T
          def max(v: T): T
          def div(v: Double): T
          def div(v: T): T
        }

    Now I define a single Pixel type that has storage based on three doubles (basically RGB 0.0-1.0), which I've called TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
          var data: Array[Double] = v

          def this() = this(Array(0.0, 0.0, 0.0))

          def toString(): String = {
            "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
          }

          def increment(v: TripleDoublePixel) {
            data(0) += v.data(0)
            data(1) += v.data(1)
            data(2) += v.data(2)
          }

          def mul(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x * v))
          }

          def div(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x / v))
          }

          def div(v: TripleDoublePixel): TripleDoublePixel = {
            var tmp = new Array[Double](3)
            tmp(0) = data(0) / v.data(0)
            tmp(1) = data(1) / v.data(1)
            tmp(2) = data(2) / v.data(2)
            new TripleDoublePixel(tmp)
          }

          def max(v: TripleDoublePixel): TripleDoublePixel = {
            val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
            val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
            if (lv > vv) (this) else v
          }
        }

    Now I want to write code that uses this without having to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?

    Read the article

  • PHP : If...Else...Query

    - by Rachel
    I am executing this statement inside while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE). What I want in the else branch is to throw an exception only for the data value which did not satisfy the if condition. Right now I am displaying the complete row, as I am not sure how to throw an exception only for the data which does not satisfy the condition. Code:

        if ((strtotime($data[11]) && strtotime($data[12]) && strtotime($data[16])) !== FALSE
            && ctype_digit($data[0]) && ctype_alnum($data[1]) && ctype_digit($data[2])
            && ctype_alnum($data[3]) && ctype_alnum($data[4]) && ctype_alnum($data[5])
            && ctype_alnum($data[6]) && ctype_alnum($data[7]) && ctype_alnum($data[8])
            && $this->_is_valid($data[9]) && ctype_digit($data[10]) && ctype_digit($data[13])
            && $this->_is_valid($data[14])) {
            // Some logic
        } else {
            throw new Exception("Data {$data[0], $data[1], $data[2], $data[3], $data[4], $data[5], $data[6], $data[7], $data[8], $data[9], $data[10], $data[11], $data[12], $data[13], $data[14], $data[16]} is not in valid format");
        }

    Guidance would be highly appreciated as to how I can throw an exception only for the data value which did not satisfy the if condition.

    Read the article

  • Entityframework duplicate record on second insert

    - by Delysid
    I am building an application for recipe/meal planning, and I have come across a problem I can't seem to figure out. I have a table for units of measure, where I keep the units used; I only want unique units in here (for grocery list calculation and so forth). But if I use a unit from the table in a recipe, the first time it is okay and nothing is inserted into units of measure, but the second time I get a "duplicate". I suspect it has something to do with the EntityKey, because the primary key is an identity column on the SQL Server (2008 R2). For some reason it works to change the object state on some objects (courses, see code) and that does not generate a duplicate, but that does not work on the unit of measure. My insert method looks like this:

        public recipe Create(recipe recipe)
        {
            using (RecipeDataContext ctx = new RecipeDataContext())
            {
                foreach (recipe_ingredient rec_ing in recipe.recipe_ingredient)
                {
                    if (rec_ing.ingredient.ingredient_id == 0)
                    {
                        ingredient ing = (from _ing in ctx.ingredients
                                          where _ing.name == rec_ing.ingredient.name
                                          select _ing).FirstOrDefault();
                        if (ing != null)
                        {
                            rec_ing.ingredient_id = ing.ingredient_id;
                            rec_ing.ingredient = null;
                        }
                    }
                    if (rec_ing.unit_of_measure.unit_of_measure_id == 0)
                    {
                        unit_of_measure _uom = (from dbUom in ctx.unit_of_measure
                                                where dbUom.unit == rec_ing.unit_of_measure.unit
                                                select dbUom).FirstOrDefault();
                        if (_uom != null)
                        {
                            rec_ing.unit_of_measure_id = _uom.unit_of_measure_id;
                            rec_ing.unit_of_measure = null;
                        }
                    }
                    ctx.Recipes.AddObject(recipe);
                    // for some reason it works to change object state of this, and not generate a duplicate
                    ctx.ObjectStateManager.ChangeObjectState(recipe.courses[0], EntityState.Unchanged);
                }
                ctx.SaveChanges();
            }
            return recipe;
        }

    My data model looks like this: http://i.imgur.com/NMwZv.png

    Read the article

  • SQL SERVER – Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video

    - by pinaldave
    Copying data from one table to another table is one of the most requested questions on forums, Facebook, and Twitter. The question has come in many formats, and in some places I have seen developers using a cursor instead of this direct method. I wrote a similar article a few years ago - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE. The article has been very popular and I have received many interesting and constructive comments. However, two specific comments kept ending up in my mailbox: 1) the SQL Server AdventureWorks sample database does not have the table I used in the example; 2) is there a video tutorial of the same example? After careful thought, I decided to build a new set of scripts for the example, very similar to the old one, as well as a video tutorial of the same. There was no better place than our SQL in Sixty Seconds series to cover this interesting small concept. Let me know what you think of this video. Here is the updated script.

        -- Method 1 : INSERT INTO SELECT
        USE AdventureWorks2012
        GO
        ----Create TestTable
        CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100))
        ----INSERT INTO TestTable using SELECT
        INSERT INTO TestTable (FirstName, LastName)
        SELECT FirstName, LastName
        FROM Person.Person
        WHERE EmailPromotion = 2
        ----Verify that Data in TestTable
        SELECT FirstName, LastName
        FROM TestTable
        ----Clean Up Database
        DROP TABLE TestTable
        GO

        -- Method 2 : SELECT INTO
        USE AdventureWorks2012
        GO
        ----Create new table and insert into table using SELECT INSERT
        SELECT FirstName, LastName
        INTO TestTable
        FROM Person.Person
        WHERE EmailPromotion = 2
        ----Verify that Data in TestTable
        SELECT FirstName, LastName
        FROM TestTable
        ----Clean Up Database
        DROP TABLE TestTable
        GO

    Related Tips in SQL in Sixty Seconds:
    - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE
    - Powershell – Importing CSV File Into Database – Video
    - SQL SERVER – 2005 – Export Data From SQL Server 2005 to Microsoft Excel Datasheet
    - SQL SERVER – Import CSV File into Database Table Using SSIS
    - SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server
    - SQL SERVER – 2005 – Generate Script with Data from Database – Database Publishing Wizard
    What would you like to see in the next SQL in Sixty Seconds video?
    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • BIND - why duplicate nameserver entries (@ and *)?

    - by user27465
    I had to manually tweak my DNS service provider's BIND file. BIND file, created by a professional hosting company, before:

        $ORIGIN mycoolsite.com.
        $TTL 300
        @       SOA     ns1.cheapreg.com. registry.cheapreg.com. ( ... )
        @   IN  3600    NS  ns1.cheapreg.com.
        @   IN  3600    NS  ns2.cheapreg.com.
        @   IN  3600    A   199.9.99.85
        @   IN  3600    A   199.9.99.86
        *   IN  3600    A   199.9.99.85
        *   IN  3600    A   199.9.99.86
        www IN  3600    A   199.9.99.85
        www IN  3600    A   199.9.99.86

    BIND file, created by a layman, after:

        $ORIGIN mycoolsite.com.
        $TTL 300
        @       SOA     ns1.cheapreg.com. registry.cheapreg.com. ( ... )
        @   IN  3600    NS  ns1.cheapreg.com.
        @   IN  3600    NS  ns2.cheapreg.com.
        *   IN  3600    A   219.94.116.50
        *   IN  3600    A   219.94.116.51
        *   IN  3600    A   219.94.116.52

    The difference is that the "pro" file has duplicated the nameserver entries, once for @ and once for *, and I haven't. Any reason I should also duplicate the nameserver entries (@ and *)?

    Read the article

  • Tracking down source of duplicate email messages in Outlook / Exchange environment

    - by Ken Pespisa
    I have a few users, who are also Blackberry users, that occasionally have duplicate emails generated from their "mailbox". I put mailbox in quotes because I'm not exactly sure where the duplicates are created. One of these users is in non-cached mode, and the other is in cached mode, and both experience the problem. In fact, the non-cached mode user was originally experiencing the problem while in cached mode, and I made the switch a few weeks ago to attempt to solve the problem. Today I discovered the issue still exists. I'm not sure if the fact that they are blackberry users could be causing the problem at all. I don't see how, but felt I should mention it anyway. Does anyone have ideas on how I might begin to troubleshoot this? I can see in the non-cached user's mailbox "Sent Items" that the message was sent only once. I confirmed the message does not state that there was a conflict and in fact that makes sense because they are in non-cached mode. On the server, we have a mail journaling feature turned on for our third-party mail archiving system, and I can see that that system sees two sent messages. And likewise, the recipient does in fact have two messages in their inbox with consecutive message IDs ([email protected]) and ([email protected]). It would seem to me that the duplicates are generated on the client, but is there a way to tell for sure?

    Read the article

  • Data Generator Source Adapter

    This component needs little explanation. It generates random integer (DT_I4) and string (DT_WSTR) data and places them in the pipeline. You specify how many columns of each you would like and for any string columns you pass a fixed length value. You then need to specify how many rows in total you require to be generated. This component is used by us to do testing of the pipeline and components downstream. Previously we would have used a script component (as a source) to generate the rows but found ourselves rewriting the code too often so created this component.
    Screenshots: SQL Server 2005 Integration Services; SQL Server 2008/2012 Integration Services.
    The component is provided as an MSI file, however to complete the installation, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Data Generator Source from the list.
    Downloads: The Data Generator Source Adapter is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - Data Generator Source Adapter for SQL Server 2005
    - Data Generator Source Adapter for SQL Server 2008
    - Data Generator Source Adapter for SQL Server 2012
    Version History:
    SQL Server 2012
    - Version 3.0.0.30 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    - Version 2.0.0.29 - SQL Server 2008 February 2008 CTP. Includes support for upgrade of 2005 packages. Simplified user interface. (4 Mar 2008)
    - Version 2.0.0.27 - SQL Server 2008 November 2007 CTP. String columns will now use the default system code page. Previously string columns always used 1252. (15 Feb 2008)
    SQL Server 2005
    - Version 1.1.0.23 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    - Version 1.0.0.0 - SQL Server 2005 IDW 16 Sept CTP. Public release. (6 Oct 2005)

    Read the article

  • Data Governance (Veri Yönetisimi)

    - by Arda Eralp
    Data governance is a system of responsibilities for operations involving data. The foundation of this system is formed by policies, standards, and procedures. Through these policies, standards, and procedures, the system decides when, under which conditions, in which activities, with which methods, and by whom data will be used. For the system to operate successfully in an organization, awareness must first be established within the organization. Once awareness has been established, the organization must adopt a governance and architecture culture. Only under these conditions can the system operate successfully. For these reasons, data governance is not a short process; on the contrary, it is a process that runs for as long as the organization exists. This tells us that data governance is not a project but a program. At the start of the program, clarifying the organization's needs and establishing awareness are fundamental. The target audience is everyone who works with data, directly or indirectly. Therefore, at the start of the program, meetings are held with the teams that make up the target audience. These meetings both build awareness and clarify the teams' needs, since those needs are communicated directly by the teams themselves. Once the target audience's needs have been clarified, this continuously running process is planned. Planning the process requires prioritizing the needs, because each team's needs may differ and not all needs can be met at the same time; this expectation rests on how the teams' needs are interconnected. After prioritizing the needs, the size of the organization must be taken into account: if the organization is too large to be governed as a whole at once, governance begins with the teams whose needs have the highest priority, so the process can be rolled out across the entire organization at a steady pace. Once the needs have been identified and the relevant teams selected, planning of the program can begin. The first item in the planning phase is the Data Governance Office, which will control the stages of the process and maintain control once the process is operating within the organization. Along with planning the Office, the roles in the process and their responsibilities are defined. The topics addressed in the planning phase are the Data Governance Office, roles and responsibilities, security, and the systems where data is stored. When the planning phase is complete, the program moves into its operational phase in line with the selected teams and their needs. In the operational phase, work is carried out on security and on the data storage systems according to each team's needs. This work is documented as a process, and when it ends the same process is run with another team for another need, so that the process becomes standardized across the organization. On the security side, both access security and usage security of the data are addressed. The work on data storage systems ensures that those systems are built and managed according to the standards defined within the program. After the operational phase, the program moves into its monitoring phase. At this point the Data Governance Office has been formed, and the policies, standards, and procedures have been defined.
    The Data Governance Office staff, within their roles and responsibilities, monitor the operation of the program and make changes to the policies, standards, and procedures when they see the need.

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only held in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible. Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance. SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley

    Read the article

  • SMTP Client implementation [on hold]

    - by orif
    I'm implementing an SMTP client. What should the client do once it has already sent the "." at the end of the mail but hasn't received "250 Ok"? This is what the conversation between the client and server looks like:

        Server Response: 220 www.sample.com ESMTP Postfix
        Client Sending : HELO domain.com
        Server Response: 250 Hello domain.com
        Client Sending : MAIL FROM: <[email protected]>
        Server Response: 250 Ok
        Client Sending : RCPT TO: <[email protected]>
        Server Response: 250 Ok
        Client Sending : DATA
        Server Response: 354 End data with <CR><LF>.<CR><LF>
        Client Sending : Subject: Example Message
        Client Sending : From: [email protected]
        Client Sending : To: [email protected]
        Client Sending :
        Client Sending : TEST MAIL
        Client Sending :
        Client Sending : .
        Server Response: 250 Ok: queued as 23411
        Client Sending : QUIT

    I'm not sure what I should do if the client sends "." and doesn't receive the 250 Ok because of a possible network error. Was the "." sent or not? Should the client resend the mail, and maybe duplicate the item, or not resend it, and risk losing an important mail item? Thank you.
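    One way to frame the decision, sketched below as an assumption rather than an authoritative answer: treat the missing reply as an "outcome unknown" state, use a generous read timeout on the end-of-DATA reply, and if you do retry, resend with the same Message-ID header so the receiving side (or the recipient's client) can recognize the copy as a duplicate. A minimal Java sketch of just the end-of-DATA step:

        // Sketch only: assumes the SMTP session is already past the 354 reply.
        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        public class SmtpDataTerminator {

            public enum Outcome { ACCEPTED, REJECTED, UNKNOWN }

            public static Outcome finishData(Socket socket, BufferedReader in, BufferedWriter out)
                    throws Exception {
                // RFC 5321 suggests waiting up to 10 minutes for the reply to the final ".".
                socket.setSoTimeout(10 * 60 * 1000);
                out.write("\r\n.\r\n");
                out.flush();
                try {
                    String reply = in.readLine();          // e.g. "250 Ok: queued as 23411"
                    if (reply != null && reply.startsWith("250")) {
                        return Outcome.ACCEPTED;           // the server has taken responsibility
                    }
                    return Outcome.REJECTED;               // an error code: safe to report failure
                } catch (SocketTimeoutException e) {
                    // The "." may or may not have been processed; neither success nor failure
                    // can be assumed, so surface this to the caller instead of guessing.
                    return Outcome.UNKNOWN;
                }
            }
        }

    On Outcome.UNKNOWN the least risky policies are either to retry with the identical Message-ID (accepting a possible, detectable duplicate) or to flag the message as "delivery uncertain" for a human, but never to silently drop it.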

    Read the article

  • A duplicate name has been detected on the TCP network

    - by MSedm
    When I installed my domain controller and DNS, I had 2 NICs on the server. Each NIC has its own IP address. The NICs are not teamed; they are separate, and the IP addresses are in the same subnet. Both IP addresses are now registered in DNS; I found them in the forward and reverse lookup zones. Everything is working OK except for the following error in the event log: "A duplicate name has been detected on the TCP network......" Now I have realized that this is because of the second NIC. My question is: if I disable the second NIC, what happens to the DNS records associated with the second IP address? How do I remove all the DNS records for the disabled NIC? There are A records, some records with the name (same as parent folder), PTR records, and maybe more. How do I disable the second NIC and remove all the associated DNS records? Please help.

    Read the article
