Search Results

Search found 59060 results on 2363 pages for 'dummy data'.

Page 130 of 2363

  • Which kinds of changes can't I make with lightweight migration in Core Data?

    - by dontWatchMyProfile
    I recently tried a lot of different stuff with lightweight migration. These all work: 1) Rename attributes (with a renaming identifier specified) 2) Add attributes 3) Add a new entity + new attribute + inverse relationship to an already existing entity 4) Remove an existing entity + relationships to that entity. It almost looks like just about anything can be handled with lightweight migration. Did I miss something? In which cases will I get into trouble and need a more complex approach?

    Read the article

  • Most performant way to check how many objects are referenced by a to-many relationship in Core Data

    - by dontWatchMyProfile
    Let's say I have an employees relationship in a Company entity, and it's to-many. And they're really many. Apple in 100 years, with 1.258.500.073 employees. Could I simply do something like NSInteger numEmployees = [apple.employees count]; without firing 1.258.500.073 faults? (Well, in 100 years, the iPhone will easily handle so many objects, for sure...but anyways)

    Read the article

  • servlet ArrayList and HashMap problem with result

    - by nonameplum
    Hi, I have this code: List<Map<String, Object>> data = new ArrayList<Map<String, Object>>(); Map<String, Object> item = new HashMap<String, Object>(); data.clear(); item.clear(); int i = 0; while (i < 5){    item.put("id", i);    i++;    out.println("id: " + item.get("id"));    out.println("--------------------------");    data.add(item); } for(i=0 ; i<5 ; i++){    out.println("print data[" + i + "]" + data.get(i)); } Result of that is: id: 0 -------------------------- id: 1 -------------------------- id: 2 -------------------------- id: 3 -------------------------- id: 4 -------------------------- print data[0]{id=4} print data[1]{id=4} print data[2]{id=4} print data[3]{id=4} print data[4]{id=4} Why is only the last element stored?

    Read the article

  • NSPredicate (Core Data fetch) to filter on an attribute value being present in a supplied set (list)

    - by starbaseweb
    I'm trying to create a fetch predicate that is the analog of the SQL "IN" statement, and the syntax to do so with NSPredicate escapes me. Here's what I have so far (the relevant excerpt from my fetching routine): NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease]; NSEntityDescription *entity = [NSEntityDescription entityForName: @"BodyPartCategory" inManagedObjectContext:_context]; [request setEntity:entity]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(name IN %@)", [RPBodyPartCategory defaultBodyPartCategoryNames]]; [request setPredicate:predicate]; The entity "BodyPartCategory" has a string attribute "name". I have a list of names (just NSString objects) in an NSArray as returned by: [RPBodyPartCategory defaultBodyPartCategoryNames] So let's say that array has strings such as {@"Liver", @"Kidney", @"Thyroid"} ... etc. I want to fetch all 'BodyPartCategory' instances whose name attribute matches one of the strings in the set provided (technically an NSArray, but I can make it an NSSet). In SQL, this would be something like: SELECT * FROM BodyPartCategories WHERE name IN ('Liver', 'Kidney', 'Thyroid') I've gone through various portions of the Predicate Programming Guide, but I don't see this simple use case covered. Pointers/help much appreciated!

    Read the article

  • Write a program for a report derived from the data in the data file JEWELRY. The data is to be input

    - by Taylor
    here is the JEWELRY file 0011 Money_Clip 2.000 50.00 Other 0035 Paperweight 1.625 175.00 Other 0457 Cuff_Bracelet 2.375 150.00 Bracelet 0465 Links_Bracelet 7.125 425.00 Bracelet 0585 Key_Chain 1.325 50.00 Other 0595 Cuff_Links 0.625 525.00 Other 0935 Royale_Pendant 0.625 975.00 Pendant 1092 Bordeaux_Cross 1.625 425.00 Cross 1105 Victory_Medallion 0.875 30.00 Pendant 1111 Marquis_Cross 1.375 70.00 Cross 1160 Christina_Ring 0.500 175.00 Ring 1511 French_Clips 0.687 375.00 Other 1717 Pebble_Pendant 1.250 45.00 Pendant 1725 Folded_Pendant 1.250 45.00 Pendant 1730 Curio_Pendant 1.063 275.00 Pendant this is the program i have used #include <iostream> #include <string> #include <iomanip> #include <fstream> using namespace std; struct productJewelry { string name; double amount; int itemCode; double size; string group; }; int main() { // declare variables ifstream inFile; int count=0; int x=0; productJewelry product[50]; inFile.open("jewelry.txt"); // file must be in same folder if (inFile.fail()) cout << "failed"; cout << fixed << showpoint; // fixed format, two decimal places cout << setprecision(2); while (inFile.peek() != EOF) { // cout << count << " : "; count++; inFile>> product[x].itemCode; inFile>> product[x].name; inFile>> product[x].size; inFile>> product[x].amount; inFile>> product[x].group; // cout << product[x].itemCode << ", " << product[x].name << ", "<< product[x].size << ", " << product[x].amount << endl; x++; if (inFile.peek() == '\n') inFile.ignore(1, '\n'); } inFile.close(); string temp; bool swap; do { swap = false; for (int x=0; x<count;x++) { if (product[x].name>product[x+1].name) { //these 3 lines are to swap elements in array temp=product[x].name; product[x].name=product[x+1].name; product[x+1].name=temp; swap=true; } } } while (swap); for (x=0; x< count; x++) { //cout<< product[x].itemCode<<" "; //cout<< product[x].name <<" "; //cout<< product[x].size <<" "; //cout<< product[x].amount<<" "; //cout<< product[x].group<<" "<<endl; } system("pause"); // to freeze Dev-c++ output screen return 0; } // end main

    Read the article

  • What's the consequence when Core Data detects an optimistic locking failure when trying to save?

    - by dontWatchMyProfile
    I get it: When a managed object context saves, the snapshots of all edited objects are compared against the values in the persistent store to see if the PS has changed since the snapshot was made. If it did change, then there's a conflict and optimistic locking failed, according to Apple. But now, what's the consequence of this? What happens next? What are my options in this case?

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, Imran's comment is written keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.
    Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transaction Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same .LDF extension. The questions now are, "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and, "Why is the .MDF file called a primary data file?" Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with an .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "Data about Data". Examples of Meta Data include system objects that store information about other objects, except the data stored by the users: sysobjects stores information about all objects in that database; sysindexes stores information about all indexes and rows of every table in that database; syscolumns stores information about all columns that each table has in that database; sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by mentioning that you want to create the object under the Primary File Group.
    Any additional data file that you add to the database will have only transactional data but no Meta Data, which is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid taking a backup of read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-world scenario where we use files could be this one: let's say you have created a database called MYDB on the D drive, which has 50 GB of space. You also have 1 database file (.MDF) and 1 log file on the D drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files, unless specified otherwise. Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from database files and releasing the empty space either to the Operating System or to SQL Server. Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has free space of about 20 GB, which means 30 GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then, MIND YOU, you can only shrink the database down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto Growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: First, there is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space. Second, remember that the Shrink command is a logged operation: when we perform the Shrink operation, this information is logged in the log file, and if there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
    A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on Google once. Doing so has many advantages: 1. If someone else has already had this question before, the chances that it is already answered are more than 50%. 2. It reduces your waiting time for the answer. (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of using this command from Query Analyzer is that your console won't be frozen, and you can perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: An .NDF file is a secondary data file. You have never heard of it because when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store data saved by the users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
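    For readers who want to try the operation Imran describes above, here is a minimal sketch of the two commands involved. The database name MYDB and the 30 GB / 10 percent targets simply reuse the figures from the example; the logical file name is an assumption, so check sys.database_files for your own.

        -- Shrink the whole database, asking SQL Server to leave about 10% free space in each file
        DBCC SHRINKDATABASE (MYDB, 10);
        GO

        -- Or shrink a single data file to a target size of roughly 30 GB (30720 MB);
        -- 'MYDB' below is assumed to be the logical file name of the primary data file, not the database name
        USE MYDB;
        GO
        DBCC SHRINKFILE (N'MYDB', 30720);
        GO

    As the post stresses, shrinking increases fragmentation, so plan index maintenance afterwards.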

    Read the article

  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes it clear what I am going to write about in this post. This is an age-old problem and I want to compile a list stating the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index or Both (the usual case). Let me start the list by suggesting one advantage and one disadvantage in each case. INT Advantage: Numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions. Numeric values are easier to understand for application users if they are displayed. Disadvantage: If your table is large, it is quite possible it will run out of values, and after some numeric value there will be no additional identity to use. GUID Advantage: Unique across the server. Disadvantage: String values are not as optimal as integer values for performance when used in joins, indexes and conditions. More storage space is required than for INT. Please note that I am looking to create a list of all the generic comparisons. There can be special cases where the stated information is incorrect; feel free to comment on those. Please leave your opinion and advice in the comment section. I will compile a final list and update this blog after a week. By listing your name in the post, I will also give due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
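    To make the comparison concrete, here is a minimal sketch of the two design choices being compared; the table and column names are made up for illustration and are not taken from the post.

        -- INT IDENTITY as the clustered primary key: a 4-byte, ever-increasing key
        CREATE TABLE OrdersInt
        (
            OrderID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );

        -- GUID as the clustered primary key: a 16-byte key; NEWSEQUENTIALID() avoids
        -- the random insert points (and page splits) that NEWID() would cause
        CREATE TABLE OrdersGuid
        (
            OrderID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );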

    Read the article

  • Accessing Server-Side Data from Client Script: Using Ajax Web Services, Script References, and jQuery

    Today's websites commonly exchange information between the browser and the web server using Ajax techniques. In a nutshell, the browser executes JavaScript code typically in response to the page loading or some user action. This JavaScript makes an asynchronous HTTP request to the server. The server processes this request and, perhaps, returns data that the browser can then seamlessly integrate into the web page. Typically, the information exchanged between the browser and server is serialized into JSON, an open, text-based serialization format that is both human-readable and platform independent. Adding such targeted, lightweight Ajax capabilities to your ASP.NET website requires two steps: first, you must create some mechanism on the server that accepts requests from client-side script and returns a JSON payload in response; second, you need to write JavaScript in your ASP.NET page to make an HTTP request to this service you created and to work with the returned results. This article series examines a variety of techniques for implementing such scenarios. In Part 1 we used an ASP.NET page and the JavaScriptSerializer class to create a server-side service. This service was called from the browser using the free, open-source jQuery JavaScript library. This article continues our examination of techniques for implementing lightweight Ajax scenarios in an ASP.NET website. Specifically, it examines how to create ASP.NET Ajax Web Services on the server-side and how to use both the ASP.NET Ajax Library and jQuery to consume them from the client-side. Read on to learn more! Read More >

    Read the article

  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content makes it harder to both style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Form developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control to not emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more! Read More >

    Read the article

  • Changing Endpoint URL for a Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    When you move your application from Development to Production, there is, more often than not, a need to change the web service endpoint URL in your ADF application. If you are using a Web Service Data Control (WSDC), you can do this in more than one way. The following example illustrates how this can be done. At design time: if the application workspace is in your control, you can quickly do this by updating the definition in the DataControl.dcx file. Along with this, you will also need to change the endpoint in connections.xml, so invoke the Edit Connections dialog and change the endpoint URL there. At deployment: another option is to change the endpoint at the EAR level during deployment. When you select Deploy -> Application Server at the application level, it brings up a Deployment Configuration dialog in which you can edit the WSDL URL and also change the Port URL. Post deployment: if you need to change this after deployment, you can do it through Oracle Enterprise Manager, but for this your application needs to be configured with a writable MDS repository. It is recommended that you use a database MDS store during deployment. So have your application configured (by having an entry in adf-config.xml) and the server configured (by having an MDS store registered). Once done, you can configure the ADF Connection in EM for this application: click 'Edit' to change the WSDL location, change the Port using Advanced Connection Configuration, change the Endpoint Address, apply the changes, and you are done!

    Read the article

  • Twitter status id conundrum

    - by jamiet
    I have an interest, a slightly perverse one some might say, in using online services and trying to figure out what the underlying (logical) data model is, and in this day and age Twitter is one that lends itself very well to scrutiny. Consider this recent tweet of mine: The URL that enables you to see that tweet is http://twitter.com/jamiet/status/12154647354. We can interpret that URL to mean "a tweet by jamiet with an id of 12154647354" and hence we might further assume that the unique identifier for the tweet is {jamiet,12154647354}. However, it's well known that Twitter gives each status a unique ID regardless of who tweeted it, so we might expect to reach that tweet just by using a URL of http://twitter.com/status/12154647354; however (at the time of writing) that only redirects to Twitter's homepage. That seems strange to me, especially given that we can use Twitter's API to access information about that tweet using only the id of the status. Witness http://api.twitter.com/1/statuses/show/12154647354.xml: [We can also access a JSON version of that information using http://api.twitter.com/1/statuses/show/12154647354.json] I'm puzzled as to why a tweet can't be accessed on the main Twitter website using the id alone. Anyone have any suggestions? @jamiet

    Read the article

  • SQL SERVER – Plan Cache and Data Cache in Memory

    - by pinaldave
    I get the following question almost all the time when I go for consultations or training. I often end up providing the scripts to my clients and attendees. Instead of writing a new blog post, today in this single blog post I am going to cover both scripts and link to the original blog posts where I have discussed them.
    Plan Cache in Memory
        USE AdventureWorks
        GO
        SELECT [text], cp.size_in_bytes, plan_handle
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(plan_handle)
        WHERE cp.cacheobjtype = N'Compiled Plan'
        ORDER BY cp.size_in_bytes DESC
        GO
    Further explanation of this script is over here: SQL SERVER – Plan Cache – Retrieve and Remove – A Simple Script
    Data Cache in Memory
        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID,
                   i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id AND (au.TYPE = 1 OR au.TYPE = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id AND au.TYPE = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO
    Further explanation of this script is over here: SQL SERVER – Get Query Plan Along with Query Text and Execution Count
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Memory

    Read the article

  • How to cross-reference many character encodings with ASCII OR UTFx?

    - by Garet Claborn
    I'm working with a binary structure, the goal of which is to index the significance of specific bits for any character encoding so that we may trigger events while doing specific checks against the profile. Each character encoding scheme has an associated system record. This record's leading value will be a C++ unsigned long long binary value and signifies the length, in bits, of encoded characters. Following the length are three values, each a bit field of that length: offset_mask defines the occurrence of non-printable characters within the min/max of print_mask; range_mask defines the occurrence of the most popular 50% of printable characters; print_mask defines the occurrence value of printable characters. The structure of the profiles has changed since the original version of this question. Most likely I will try to factorize or compress these values in the long term, instead of starting out with ranges, after reading more. I have to write some of the core functionality for these main reasons: it has to fit into a particular event architecture we are using; I want a better understanding of character encoding, which I'm about to need; and integrating into a non-linear design excludes many libraries without special hooks. I'm unsure if there is already a standard, cross-encoding mechanism for communicating such data. I'm just starting to look into how chardet might do profiling, as suggested by @amon. The Unicode BOM would easily be enough (for my current project) if all encodings were Unicode. Of course, ideally one would like to support all encodings, but I'm not asking about implementation - only the general case. How can these profiles be efficiently populated, to produce a set of bitmasks which we can use to match strings with common characters in multiple languages? If you have any editing suggestions, please feel free; I am a lightweight when it comes to localization, which is why I'm trying to reach out to the more experienced. Any caveats you may be able to help with will be appreciated.

    Read the article

  • SQL Authority News – Download SQL Server Data Type Conversion Chart

    - by pinaldave
    Data types are very important concepts in SQL Server, and quite often there is a need to convert a value from one data type to another. I have seen that developers often get confused when they have to convert a data type. There are two important concepts when it comes to data type conversion. Implicit Conversion: Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. Explicit Conversions: Explicit conversions are those conversions that require the CAST or CONVERT function to be specified. What this means is that if you are trying to convert a value from datetime2 to time or from tinyint to int, SQL Server will automatically convert it (implicit conversion) for you. However, if you are attempting to convert timestamp to smalldatetime or datetime to int, you will need to explicitly convert them using either the CAST or CONVERT function with the appropriate parameters. Let us see a quick example of Implicit Conversion and Explicit Conversion. We need both types of conversion in different situations. There are so many different data types that it is humanly impossible to know which conversions are implicit and which require explicit conversion, and there are also cases where conversion is not possible at all. Microsoft has published a chart whose grid displays the various conversion possibilities along with a quick guide. Download SQL Server Data Type Conversion Chart Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
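    The quick examples themselves do not survive in this excerpt, so here is a minimal sketch of the same idea; the variable names and values are made up for illustration.

        -- Implicit conversion: TINYINT is converted to INT with no CAST or CONVERT
        DECLARE @Small TINYINT = 200;
        DECLARE @Big INT;
        SET @Big = @Small;
        SELECT @Big AS ImplicitResult;

        -- Explicit conversion: DATETIME to INT is not converted automatically,
        -- so CAST or CONVERT must be specified
        DECLARE @Now DATETIME = GETDATE();
        SELECT CAST(@Now AS INT) AS ExplicitCast,
               CONVERT(INT, @Now) AS ExplicitConvert;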

    Read the article

  • Call for Abstracts Now Open for Microsoft ASP.NET Connections (Closing April 26)

    - by plitwin
    We are putting out a call for abstracts to present at the Fall 2010 Microsoft ASP.NET Connections conference in Las Vegas, Nov 9-13 2009. The due date for submissions is April 26, 2010. For submitting sessions, please use this URL: http://www.deeptraining.com/devconnections/abstracts Please keep the abstracts under 200 words each and in one paragraph. No bulleted items or line breaks, and please use a spell-checker. Do not email abstracts; you need to use the web-based tool to submit them. Please submit at least 3 abstracts, but it would help your chances of being selected if you submitted 5 or more. You are also encouraged to suggest all-day pre- or post-conference workshops. We need to finalize the conference content and the tracks layout in just a few short weeks, so we need your abstracts by April 26th. No exceptions will be granted on late submissions! Topics of interest include (but are not limited to): ASP.NET Webforms, ASP.NET AJAX, ASP.NET MVC, Dynamic Data, and anything else related to ASP.NET. For Fall 2010, we are having a separate Silverlight conference where you can submit abstracts for Silverlight and Windows 7 Phone development. In fact, you can use the same URL to submit sessions to Microsoft ASP.NET Connections, Silverlight Connections, Visual Studio Connections, or SQL Server Connections. The URL again is: http://www.deeptraining.com/devconnections/abstracts Please realize that while we want a lot of the new and the cool, it's also okay to propose sessions on the more mundane "real world" stuff as it pertains to ASP.NET. What you will get if selected: $500 per regular conference talk; compensation for full-day workshops ranging from $500 for 1-20 attendees to $2500 for 200+ attendees; coach airfare and hotel stay paid by the conference; free admission to all of the co-located conferences; the speaker party; the adoration of attendees; etc. Your continued support of Microsoft ASP.NET Connections and the other DevConnections conferences is appreciated. Good luck and thank you, Paul Litwin, Microsoft ASP.NET Conference Chair

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?
    Answer
    -------
    SOX compliance with regard to IT:
    1. Requires auditing of data changes: by whom, what, and when.
        a. Audit trail profiles can be set up for key financial series, and you can view them in audit trail reports.
        b. One piece of functionality we do not have, which is typically asked for, is user login history. We have only active sessions; history is not available.
    2. Segregation of duties.
        a. With respect to TPM, you could have the deduction and financial analyst for settlement be different from the promotion creator, promotion approver or sales team.
        b. The budget approver for funds can be different from the funds consumer.
        c. The promotion creator can be different from the promotion approver.
        d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.
    One additional requirement is transparency of forward commitments entered into with retailers / distributors for trade spending and promotions. Outside of Demantra - Consumer Goods Trade Funds Analytics.

    Read the article

  • SQL SERVER – Simple Installation of Master Data Services (MDS) and Sample Packages – Very Easy

    - by pinaldave
    I tweeted recently: 'Installing #sql Server 2008 R2 – Master Data Services. Painless.' After doing so, I got quite a few emails from other users asking why I thought it was painless. The reason was very simple: I was able to install it rather quickly on my laptop without any issues. Along with these emails there were also a few requests regarding how to install MDS, as well as the sample databases. Please note that I am the admin of my machine and I installed MDS as the admin as well. Talk to your network administrator to figure out the most suitable settings for better security of login users. Additionally, since MDS is only supported on a 64-bit machine, I had rebuilt my computer a week before with a 64-bit OS and a 64-bit SQL Server to go with it. Here is a quick picture tour of the installation. First of all, go to your SQL Server 2008 R2 install self-extracted folder and find the .msi file at C:\1033_enu_lp\x64\setup\masterdataservices.msi. Once you click on the file, follow the image tour below. You can ask me any questions in case you are still confused by any of the steps of the installation. While searching the internet for a similar installation process, I landed on the official blog of the MDS team, where they have many interesting posts about it. If anything written in my posts contradicts the information on the official blog, I suggest that you follow the advice of the official blog. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Leverage Location Data with Maps in Your Apps

    - by stephen.garth
    Free Webinar: "Add Maps to Your Java Applications - The Easy Way" Wednesday May 26 at 9:00am Pacific Time Putting maps in your apps is a great way to put your apps on the map! Maps provide a location context that can trigger those "aha" moments leading to better business decisions. Tune into this free webinar to find out how easy it is to leverage spatial and location data, much of which is already in your Oracle Database. NAVTEQ's Dan Abugov and Oracle's Shay Shmeltzer combine their considerable experience with Oracle Spatial and Java application development to demonstrate how you can quickly and easily add maps to your Java applications, leveraging Oracle Spatial 11g, Oracle Fusion Middleware MapViewer, Oracle JDeveloper and ADF 11g. Register here. Learn more about Oracle spatial and location technology var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); try { var pageTracker = _gat._getTracker("UA-13185312-1"); pageTracker._trackPageview(); } catch(err) {}

    Read the article

  • Announcing Oracle-Demantra 12.2.1 Release

    - by user702295
    We are excited to announce that Oracle Demantra 12.2.1 is now available for new and existing customers. All customers who are not incorporating Demantra with other VCP products are welcome to upgrade without any restrictions. Customers who are using Demantra in conjunction with VCP products will need to upgrade VCP to 12.2.1, which requires application to and participation in the Oracle E-Business Suite early adopter program. Demantra 12.2.1 includes a wide array of new features driven by customer requirements and needs. Key features include:
    · Streamlined import and export from Microsoft Excel
    · Support for Gregorian Month data aggregation in a weekly system
    · Multilanguage support for eleven languages
    · Promotion Calendar Optimization
    · Enhanced integration with Advanced Planning Command Center (VCP 12.2.1 Required)
    Demantra 12.2.1 will work with JD Edwards EnterpriseOne 9.1 using the AIA 11.4 for the Value Chain Planning Base Integration Pack. Demantra 12.2.1 will only work with VCP 12.2.1. Demantra 12.2.1 and VCP 12.2.1 will work with EBS 12.1.3 or EBS 12.2.1.

    Read the article

  • SQL Bits X – Temporal Snapshot Fact Table Session Slides & Demos

    - by Davide Mauri
    Already 10 days have passed since SQL Bits X in London. I really enjoyed it! Those kinds of events are great not only for the content but also for meeting friends that – due to distance – it is not possible to meet every day. Friends from PASS, SQL CAT, Microsoft, the MVP program and so on, all in one place, drinking beer and whisky and having fun. A perfect mixture for a great learning and sharing experience! I also really enjoyed delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees... but I was totally wrong! It seems that the problem of handling daily snapshots of data is more common than I expected. I have also already had feedback from several attendees who applied the explained technique to their existing solutions with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With this last version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I've left this option as a little exercise for you :) )
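    For anyone curious about the LEAD exercise mentioned above, here is one possible shape of it. This is only a minimal sketch, not the demo's code; the snapshot table and column names are assumptions.

        -- Derive each snapshot row's validity interval from the next snapshot date
        -- for the same member; the current (open) row gets an artificial end date.
        SELECT  ProductKey,
                SnapshotDate AS ValidFrom,
                LEAD(SnapshotDate, 1, '9999-12-31')
                    OVER (PARTITION BY ProductKey ORDER BY SnapshotDate) AS ValidTo,
                QuantityOnHand
        FROM    dbo.InventorySnapshot;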

    Read the article
