Search Results

Search found 30279 results on 1212 pages for 'database drift'.


  • SQL SERVER – Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue

    - by pinaldave
    There is an old quote: "A picture is worth a thousand words." I believe this quote immensely. Quite often I get phone calls saying something is not working and asking if I can help. In most cases my reaction is that I need to know more: send me the exact error or a screenshot. Until I see the error or reproduce the scenario myself, I prefer not to comment. Yesterday I got a similar phone call from an old friend who was not sure what was going on. Here is what he said:

    "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user-created objects, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user-created tables and other objects. Can you help me fix it?"

    My immediate reaction was that he was facing a security and permissions issue. However, before making that recommendation I asked him to send me a screenshot of his own SSMS and his colleague's SSMS. After carefully looking at both screenshots, I was confident about the issue and we were able to resolve it. Let us reproduce the same scenario; maybe there is some learning in it for us.

    Issue: user not able to see user-created objects.

    First, here is my friend's SSMS screen (recreated on my machine). Now here is his colleague's SSMS screen (recreated on my machine). You can see that my friend could not see the user tables but his colleague certainly could. Now I believed it was a permissions issue. I then asked him to send me another image showing the various permissions of each user in the database: my friend's screen, and his colleague's screen. This proved that my friend did not have access to the AdventureWorks database, and because of that he was not able to see its objects. He did have public access, which grants rights similar to guest access. However, their SQL Server had followed my earlier advice on limiting guest access, which meant he was not able to see any user-created objects.

    My next question was to find out what kind of access my friend's colleague had. He replied that the colleague is the admin of the server. I suggested that if my friend was supposed to have admin access to the database, he should request it from his colleague. My friend promptly did so, and on the following screen his colleague added him as an admin. You can do the same using the following T-SQL script:

        USE [AdventureWorks2012]
        GO
        ALTER ROLE [db_owner] ADD MEMBER [testguest]
        GO

    Once my friend was an admin, he was able to access all the user objects just as he expected. Please note that this complete exercise was done on a development server. One should not play around with security on a live or production server; security is an issue that should be left to the senior administrators of the server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
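
    If you are unsure which side of this situation you are on, a quick way to check is to list your own effective permissions in the current database. A minimal sketch (the database name is just an example):

        USE [AdventureWorks2012];
        GO
        -- Lists every permission the current login has at the database level
        SELECT * FROM fn_my_permissions(NULL, 'DATABASE');
        GO

    A user with only guest-level access will see a very short list here, while a db_owner member will see the full set.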

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a guest blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.

    When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack (apps to disk) that meets your organization's every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:

    1. No end-to-end visibility into the enterprise-wide application deployment. Each vendor solution added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.
    2. No visibility into performance bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including the ability to identify cross-tier issues with databases, hung requests, slowness, memory leaks, and hardware errors/failures causing database or middleware issues.

    As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single-pane-of-glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company, Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle- and partner-provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners; you will find three of our solutions there, including our flagship plug-in for VMware.

    Let's look at Blue Medora's VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:

    Symptoms: my database is bogging down, yet the database appears okay internally; maybe it's starved for resources? My OS tooling is showing everything is "OK"; something doesn't add up.
    Root cause: through the VMware plug-in we can see the problem is actually on the virtualization layer.
    Solution: from within Enterprise Manager, the same tool you use for all of your database tuning, we can overlay the data of the database target, host target, and virtual machine target for a true picture of the root cause. Here is the console view.

    Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered. With Metric Extensions you have the "next generation" of user-defined metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine if the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process, and can be used as a validation for process completion. To automate the monitoring of these log files, perform the following steps:

    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert.
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could differ based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way this would be good cause for an alert.
    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could differ based on the Reporting Database version):

        [INFO] (Message) Finished Writing Report

    4. You could write an Ant script to execute the ETL process with failonerror="true", and from there send results to an external tool to monitor the jobs, send email, or write to a database.

    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.

    Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records an entry for each ETL run along with a start and finish time. If the ETL process has failed, the finish date should be null. This table can be queried at the time the ETL process is expected to be finished, and an alert sent if the value is null. These are just some options; there are additional ways this can be accomplished based around these two areas, log files or the database.

    Here is an additional query to gather more information about your ETL run (connect as Staruser):

        SELECT SYSDATE,
               test_script,
               DECODE(loc, 0, PROCESSNAME, TRIM(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
               duration duration
          FROM (SELECT (e.endtime - b.starttime) * 1440 duration,
                       TO_CHAR(b.starttime, 'hh24:mi:ss') starttime,
                       TO_CHAR(e.endtime, 'hh24:mi:ss') endtime,
                       b.PROCESSNAME,
                       INSTR(b.PROCESSNAME, ']') loc,
                       b.infotype test_script
                  FROM (SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'BEGIN') b
                 INNER JOIN
                       (SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'END') e
                    ON b.processid = e.processid
                   AND b.PROCESSNAME = e.PROCESSNAME
                 ORDER BY b.starttime)
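
    A minimal sketch of the etl_processmaster check described above. The column name FINISHDATE is an assumption based on the description; verify it against your Star schema before wiring this into an alert:

        -- Alert if the most recent ETL run has no finish time recorded
        -- (FINISHDATE is an assumed column name; check your schema)
        SELECT *
          FROM etl_processmaster
         WHERE FINISHDATE IS NULL;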

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time: W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day, such as calendar year, month, week, and quarter, allowing it to be used as the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is based on the same date range.

    The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are determined by the defined date range, which is a start date plus a rolling interval, generally the start date plus three years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near-real-time reporting in P6, and the same date range is validated and used for the P6 Reporting Database. The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 is the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process runs, the Reporting Database picks up this change and also adjusts the max date in W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limits may not be adjusted until a change occurs to cause a recalculation, but with general system usage the dates in these tables progress forward with the rolling interval.

    Choosing a large date range can affect the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the Star. The date range defines how far out activity and resource assignment spread data is spread in these tables. If an activity lasts 5 days it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data beyond that 3-year range is bucketed into the last day of the range. At the overall project level, and even the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is the important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally that level of granularity years into the future is not needed. Remember, all those values 5, 10, 15, or 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
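
    A quick way to confirm the date range your star schema actually holds is to query the two tables directly. A minimal sketch, using the table and column names described above:

        -- Effective range of the day dimension
        SELECT MIN(day_dt), MAX(day_dt) FROM W_DAY_D;

        -- Effective range of the spread calendar
        SELECT MIN(daydate), MAX(daydate) FROM W_Calendar_FS;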

    Read the article

  • Criminals and Other Illegal Characters

    - by Most Valuable Yak (Rob Volk)
    SQLTeam's favorite Slovenian blogger Mladen (b | t) had an interesting question on Twitter: http://www.twitter.com/MladenPrajdic/status/347057950470307841

    I liked Kendal Van Dyke's (b | t) reply: http://twitter.com/SQLDBA/status/347058908801667072

    And he was right! This is one of those pretty-useless-but-sounds-interesting propositions that I've based all my presentations on, and most of my blog posts. If you read all the replies you'll see a lot of good suggestions. I particularly like Aaron Bertrand's (b | t) idea of going into the Unicode character set, since there are over 65,000 characters available. But how to find an illegal character? Detective work?

    I'm working on the premise that if SQL Server rejects a character as a name, it will throw an error. So all we have to do is generate all Unicode characters, rename a database with each character, and catch any errors. It turns out that dynamic SQL can lend a hand here:

        IF DB_ID(N'a') IS NULL CREATE DATABASE [a];
        DECLARE @c INT=1, @sql NVARCHAR(MAX)=N'', @err NVARCHAR(MAX)=N'';
        WHILE @c<65536
        BEGIN
            BEGIN TRY
                SET @sql=N'alter database ' +
                    QUOTENAME(CASE WHEN @c=1 THEN N'a' ELSE NCHAR(@c-1) END) +
                    N' modify name=' + QUOTENAME(NCHAR(@c));
                RAISERROR(N'*** Trying %d',10,1,@c) WITH NOWAIT;
                EXEC(@sql);
                SET @c+=1;
            END TRY
            BEGIN CATCH
                SET @err=ERROR_MESSAGE();
                RAISERROR(N'Ooops - %d - %s',10,1,@c,@err) WITH NOWAIT;
                BREAK;
            END CATCH
        END
        SET @sql=N'alter database ' + QUOTENAME(NCHAR(@c-1)) + N' modify name=[a]';
        EXEC(@sql);

    The script creates a dummy database "a" if it doesn't already exist, and only tests single characters as a database name. If you have databases with single-character names then you shouldn't run this on that server.

    It takes a few minutes to run, but if you do you'll see that no errors are thrown for any of the characters. It seems that SQL Server will accept any character, no matter where it's from. (Well, there's one, but I won't tell you which. Actually there are two, but one of them requires some deep existential thinking.) The output is also interesting, as quite a few codes do some weird things there. I'm pretty sure it's due to the font used in SSMS for the messages output window; not all characters are available in it. If you run the script using the SQLCMD utility with the -o switch to output to a file and -u for Unicode output, you can open the file in Notepad or another text editor and see the whole thing.

    I'm not sure what character I'd recommend to answer Mladen's question. I think the standard tab (ASCII 9) is fine. There are also several specific separator characters in the original ASCII character set (decimal 28-31). But of all the choices available in Unicode whitespace, I think my favorite would be the Mongolian Vowel Separator. Or maybe the zero-width space (that'll be fun to print!). And since this is Mladen we're talking about, here's a good selection of "intriguing" characters he could use.

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

        Users
        -----
        ID (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to having the database enforce that logic, as this prevents any other type of client from inserting duplicate data.

    My other issue is related to LINQ To SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    I then iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();
        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great: I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB), and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail, even when there isn't a row in the table that would cause a unique violation. Can anyone explain this?

    P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected, indicating that it's something to do with the LINQ To SQL DataContext. Thanks.
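
    For comparison, the "where not exists" style the question mentions avoiding would look roughly like this in T-SQL, a sketch against the Users table described above:

        -- Insert only if the username is not already present
        INSERT INTO Users (Username)
        SELECT 'Tim'
        WHERE NOT EXISTS (SELECT 1 FROM Users WHERE Username = 'Tim');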

    Read the article

  • What is your best-practice advice on implementing SQL stored procedures (in a C# winforms application)?

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: "When should you use stored procedures?" and "Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS's?" I am a beginner at integrating .NET/SQL, though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regard to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008, and SQL Server 2008, though this question can be regarded as language- and database-agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database.

    Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers:

    1. Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Or am I better off creating all of the stored procedures in my application, inside a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated.

    2. I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so?

    3. Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?
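
    For readers new to the syntax, wrapping even a trivial inline query in a procedure looks roughly like this. A sketch only; the table and column names are placeholders, not anything from the question:

        -- A minimal stored procedure replacing an inline query
        CREATE PROCEDURE dbo.GetUserById
            @Id INT
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT id, name FROM dbo.Users WHERE id = @Id;
        END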

    Read the article

  • how to restrict a jsp hit counter?

    - by user261002
    I am writing a hit counter in JSP for my coursework. I have written the code; there are no errors and it is working. The problem is this: if the user has the website open and visits different pages, every time the user comes back to the home page the counter adds another number. How can I restrict this? Should I restrict it with a session? This is my code.

    The current count for the counter bean is:

        <%
        counter.saveCount();
        int _numberofvisitors = counter.getVisitorsNumber();
        out.println(_numberofvisitors);
        %>

    Bean:

        package counter;

        import java.sql.*;
        import java.sql.SQLException;

        public class CounterBean implements java.io.Serializable {

            int coun = 0;

            public CounterBean() {
                database.DatabaseManager.getInstance().getDatabaseConnection();
            }

            public int getCoun() {
                return this.coun;
            }

            public void setCoun(int coun) {
                this.coun += coun;
            }

            public boolean saveCount() {
                boolean _save = false;
                database.SQLUpdateStatement sqlupdate =
                    new database.SQLUpdateStatement("counter", "hitcounter");
                sqlupdate.addColumn("hitcounter", getCoun());
                if (sqlupdate.Execute()) {
                    _save = true;
                }
                return _save;
            }

            public int getVisitorsNumber() throws SQLException {
                int numberOfVisitors = 0;
                if (database.DatabaseManager.getInstance().connectionOK()) {
                    database.SQLSelectStatement sqlselect =
                        new database.SQLSelectStatement("counter", "hitcounter", "0");
                    ResultSet _userExist = sqlselect.executeWithNoCondition();
                    if (_userExist.next()) {
                        numberOfVisitors = _userExist.getInt("hitcounter");
                    }
                }
                return numberOfVisitors;
            }
        }

    Read the article

  • sqlite3 no insert is done after --> insert into --> SQLITE_DONE

    - by Fra
    Hi all, I'm trying to insert some data in a table using sqlite3 on an iPhone. All the error codes I get from the various steps indicate that the operation should have been successful, but in fact the table remains empty. I think I'm missing something. Here is the code:

        sqlite3 *database = nil;
        NSString *dbPath = [[[NSBundle mainBundle] resourcePath]
            stringByAppendingPathComponent:@"placemarks.sql"];
        if (sqlite3_open([dbPath UTF8String], &database) == SQLITE_OK) {
            sqlite3_stmt *insert_statement = nil;
            // where int pk, varchar name, varchar description, blob picture
            static char *sql = "INSERT INTO placemarks (pk, name, description, picture) "
                               "VALUES(99,'nnnooooo','dddooooo', '');";
            if (sqlite3_prepare_v2(database, sql, -1, &insert_statement, NULL) != SQLITE_OK) {
                NSAssert1(0, @"Error: failed to prepare statement with message '%s'.",
                          sqlite3_errmsg(database));
            }
            int success = sqlite3_step(insert_statement);
            int finalized = sqlite3_finalize(insert_statement);
            NSLog(@"success: %i finalized: %i", success, finalized);
            NSAssert1(101, @"Error: failed to insert into the database with message '%s'.",
                      sqlite3_errmsg(database));
        }

    sqlite3_step returns 101, which is SQLITE_DONE, so that should be OK. If I execute the SQL statement on the command line it works properly. Does anyone have an idea? Could it be that there's a problem writing placemarks.sql because it's in the resources folder?

    rgds Fra

    Read the article

  • UIImagePickerController causing sqlite errors (Simulator only)

    - by MrHen
    When I display a UIImagePickerController in a UIPopoverController, the console outputs the following warnings:

        sqlite error 8 [attempt to write a readonly database]
        sqlite error 8 [attempt to write a readonly database]
        sqlite error 8 [attempt to write a readonly database]
        sqlite error 1 [no such column: duration]
        sqlite error 1 [no such column: duration]
        sqlite error 8 [attempt to write a readonly database]
        sqlite error 8 [attempt to write a readonly database]
        sqlite error 8 [attempt to write a readonly database]

    Here is the code that displays the picker:

        - (void)openImagePickerController:(id)sender {
            // TODO: only open one at a time
            UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
            //imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
            imagePicker.delegate = self;
            [self presentViewController:imagePicker sender:sender animated:YES modal:YES];
            [imagePicker release];
        }

        - (void)presentViewController:(UIViewController *)vc
                               sender:(id)sender
                             animated:(BOOL)animated
                                modal:(BOOL)modal {
            BOOL useNavController = NO;
            //if((void *)UI_USER_INTERFACE_IDIOM() != NULL &&
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) { // are we on an iPad?
                Class popoverClass = NSClassFromString(@"UIPopoverController");
                if (!popoverClass) {
                    useNavController = YES;
                } else {
                    if (currentPopover == nil || currentPopover.contentViewController != vc) {
                        if (currentPopover != nil) {
                            [currentPopover dismissPopoverAnimated:animated];
                            [currentPopover.delegate popoverControllerDidDismissPopover:currentPopover];
                        }
                        UIPopoverController *popover =
                            [[popoverClass alloc] initWithContentViewController:vc];
                        currentPopover = popover;
                        currentPopover.delegate = self;
                        [popover presentPopoverFromBarButtonItem:sender
                                        permittedArrowDirections:UIPopoverArrowDirectionAny
                                                        animated:animated];
                    }
                }
                //[aPopover release];
            } else {
                useNavController = YES;
            }
            if (useNavController) {
                if (modal) {
                    [self presentModalViewController:vc animated:animated];
                } else {
                    [self.view addSubview:vc.view];
                }
            }
        }

    Stepping through with the debugger shows that the errors occur after [UIPopoverController presentPopoverFromBarButtonItem:permittedArrowDirections:animated:] runs. Of note: the program continues running just fine; there is no noticeable weirdness in UIImagePickerController's behavior when selecting an image or canceling; it happens in Simulator 3.2; and I am unable to reproduce it on an iPad running 3.2. Is this just another simulator bug?

    Read the article

  • SqlBackup from my client app

    - by Robbert Dam
    I have a client app that runs on several machines and connects to a single SQL database. In the client app, the user has the possibility to use a local SQL (CE) database or connect to a remote SQL database. There is also a box where a backup location can be set. For the backup of the remote SQL database I use the following code:

        var bdi = new BackupDeviceItem(backupFile, DeviceType.File);
        var backup = new Backup { Database = "AppDb", Initialize = true };
        backup.Devices.Add(bdi);
        var server = new Server(connection);
        backup.SqlBackup(server);

    When developing my application, I wasn't aware that the given backupFile is written on the machine where the SQL Server runs, not on the client machine! What I want is to "dump" all the data from my database to a local file. (Why? Because users may enter a network location in the "backup location" box, and backing up SQL immediately to a network location fails. So I need to write locally first and then copy to the network in that case.) Is there an alternative to the method above?
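
    The SMO call above corresponds to issuing a BACKUP DATABASE statement, which makes the behavior easier to see: the path is interpreted by the server process, not the client. A sketch (the path is just an example):

        -- Runs on the server; 'C:\Backups\AppDb.bak' refers to the
        -- server's file system, not the client's
        BACKUP DATABASE [AppDb]
        TO DISK = N'C:\Backups\AppDb.bak'
        WITH INIT;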

    Read the article

  • Warning: passing argument 1 of "sqlite3_bind_text" from incompatible pointer type. What should I do?

    - by Amarpreet
    Hi all, I am pretty new to iPhone development. I have created one function to insert data into the database. The code compiles successfully, but when it comes to the statement sqlite3_bind_text(sqlStatement, 1, [s UTF8String], -1, SQLITE_TRANSIENT); it does not do anything but hangs, and the compiler warns "passing argument 1 of 'sqlite3_bind_text' from incompatible pointer type" for all such statements. The same code I am using to fetch data from the database works on another view controller. Below is the code; it's pretty straightforward. Please help, guys.

        - (void)SaveData:(NSString *)FirstName
                        :(NSString *)LastName
                        :(NSString *)State
                        :(NSString *)Street
                        :(NSString *)PostCode
        {
            databaseName = @"Zen.sqlite";
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(
                NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDir = [documentPaths objectAtIndex:0];
            databasePath = [documentsDir stringByAppendingPathComponent:databaseName];

            sqlite3 *database;
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                const char *sqlStatement = "insert into customers (FirstName, LastName, "
                    "State, Street, PostCode) values(?, ?, ?, ?, ?)";
                sqlite3_stmt *compiledStatement;
                sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL);
                sqlite3_bind_text(sqlStatement, 1, [FirstName UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(sqlStatement, 2, [LastName UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(sqlStatement, 3, [State UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(sqlStatement, 4, [Street UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(sqlStatement, 5, [PostCode UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_step(sqlStatement);
                sqlite3_finalize(compiledStatement);
            }
            sqlite3_close(database);
        }

    Read the article

  • iPhone and Core Data: how to retain user-entered data between updates?

    - by Shaggy Frog
    Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal; let's say a rating (on a scale of 1 to 5), as well as some notes they can enter about the animal. However, the user won't be able to modify the animal data itself. Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information.

    We'd probably want to use Core Data to build this app. Let's also say that we have a process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date.

    But now the tricky part: how do we store the (dynamic) user custom data in a sound manner? My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct? My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?

    Read the article

  • al32utf8 in oracle and SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options:

    1. Use nvarchar/nclob fields for those fields that need Greek (it is not all fields). We have tested this and gotten it to work with Java coding.
    2. Convert Oracle to an AL32UTF8 database. I am not asking how to do this; I got the procedure from the Oracle site and Oracle Support. I know what is involved: lossy data, etc., and increasing the size of the database.

    My question: we have users of our system that connect to our database with database links but work on SQL Server and IBM DB2 databases. I do not have access to those databases and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF-8 data? I would assume that English/ASCII characters are fine and the Greek will end up as junk data.

    I also ran the Oracle Character Set Scanner (the Oracle command-line utility you use to get information about the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with third-party databases? These are customers of our data, and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.
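
    Before committing to the conversion, it's worth confirming what the database currently reports as its character set. A quick check on the Oracle side, via the standard data dictionary view:

        -- Current database character set (expecting a non-UTF8 value here)
        SELECT value
          FROM nls_database_parameters
         WHERE parameter = 'NLS_CHARACTERSET';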

    Read the article

  • Convert ADO.Net EF Connection String To Be SQL Azure Cloud Connection String Compatible!?

    - by Goober
    The Scenario: I have written a Silverlight 3 application that uses a SQL Server database. I'm moving the application onto the cloud (Azure platform). In order to do this I have had to set up my database on SQL Azure. I am using the ADO.NET Entity Framework to model my database. I have got the application running on the cloud, but I cannot get it to connect to the database. Below is the original localhost connection string, followed by the SQL Azure connection string that isn't working. The application itself runs fine, but fails when trying to retrieve data.

    The original localhost connection string:

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
             provider=System.Data.SqlClient;
             provider connection string=&quot;
             Data Source=localhost;
             Initial Catalog=InmarsatZenith;
             Integrated Security=True;
             MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The converted SQL Azure connection string:

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
             provider=System.Data.SqlClient;
             provider connection string=&quot;
             Server=tcp:MYSERVER.ctp.database.windows.net;
             Database=InmarsatZenith;
             UserID=MYUSERID;Password=MYPASSWORD;
             Trusted_Connection=False;
             MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The question: does anyone know if this connection string for SQL Azure is correct? Help greatly appreciated.

    Read the article

  • Android: SQLite insertion or create table issue

    - by Ram
    Team, can anyone please help me understand what could be the problem in the below snippet of code? It fails after the "before insertion 2" and "before insertion 3" debug statements. I have the ContentValues in the ArrayList; I am iterating the ArrayList and inserting the content values into the database.

        Log.d("database", "before insertion 1");
        liteDatabase = this.openOrCreateDatabase("Sales", MODE_PRIVATE, null);
        Log.d("database", "before insertion 2");
        liteDatabase.execSQL("Create table activity ( ActivityId VARCHAR, Created VARCHAR, "
            + "AMTaps_X VARCHAR, AMTemperature_X VARCHAR, AccountId VARCHAR, AccountLocation VARCHAR, "
            + "AccountName VARCHAR, Classification VARCHAR, ActivityType VARCHAR, COTaps_X VARCHAR, "
            + "COTemperature_X VARCHAR, Comment VARCHAR, ContactWorkPhone VARCHAR, CreatedByName VARCHAR, "
            + "DSCycleNo_X VARCHAR, DSRouteNo_X VARCHAR, DSSequenceNo_X VARCHAR, Description VARCHAR, "
            + "HETaps_X VARCHAR, HETemperature_X VARCHAR, Pro_Setup VARCHAR, LastUpdated VARCHAR, "
            + "LastUpdatedBy VARCHAR, Licensee VARCHAR, MUTaps_X VARCHAR, MUTemperature_X VARCHAR, "
            + "Objective VARCHAR, OwnerFirstName VARCHAR, OwnerLastName VARCHAR, PhoneNumber VARCHAR, "
            + "Planned VARCHAR, PlannedCleanActualDt_X VARCHAR, PlannedCleanReason_X VARCHAR, "
            + "PrimaryOwnedBy VARCHAR, Pro_Name VARCHAR, ServiceRepTerritory_X VARCHAR, ServiceRep_X VARCHAR, "
            + "Status VARCHAR, Type VARCHAR, HEINDSTapAuditDate VARCHAR, HEINEmployeeType VARCHAR)");
        Log.d("database", "before insertion 3");

        int counter = 0;
        int size = arrayList.size();
        for (counter = 0; counter < size; counter++) {
            ContentValues contentValues = (ContentValues) arrayList.get(counter);
            liteDatabase.insert("activity", "activityInfo", contentValues);
            Log.d("database", "Database insertion is done");
        }

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    1. Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    2. Are there known problems with Delphi 2009, dbExpress, and Interbase 7.5 related to Unicode character sets?
    3. Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    4. Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every data module (dfm and source code) too?
    5. Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Read the article

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code, and the output is really the same for each user who logs into my system. I know that this is going to be inefficient when the code goes into production, so I am trying to figure out the best way to cache the generated pages.

    The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. If the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated, served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it, therefore avoiding all the CPU waste of generating the HTML. I am not only looking to minimize load times, but to minimize CPU usage.

    However, one issue that I am having is that I can't figure out how to programmatically check when the version of the code uploaded to App Engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe it is not the final solution, since I really only need to regenerate HTML when the code is updated (as opposed to every time the memcache expires).

    Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex

    Read the article

  • Retrieve property from classpath inside POM

    - by Jeroen
    For my current project I want to integrate a Maven plug-in for database migrations. For this plug-in to work, however, I have to obtain the database settings inside my POM. My database settings are currently placed inside a hibernate.properties file, positioned in a directory that is marked as a Maven resource. For a variety of reasons I do not want to duplicate my database configuration in both the POM and hibernate.properties.

    I'm aware that Maven offers a "filtering" ability which makes it possible to specify the database settings as properties inside my POM, and reference them inside my hibernate.properties as ${property_name}. But as I'm using multiple Maven profiles with different property resources, this is not a suitable solution. Instead I'd like my database configuration to be loaded from a property file on my classpath (e.g. classpath:hibernate.properties), and to use those properties in my migration plug-in configuration.

    I have already tried the org.codehaus.mojo » properties-maven-plugin, but this plug-in only accepts absolute locations. Is there a plug-in which can scan all my Maven resources for a certain property?

    Read the article

  • How do I apply a "template" or "skeleton" of code in C# here?

    - by Scott Stafford
    In my business layer, I need many, many methods that follow the pattern:

        public BusinessClass PropertyName
        {
            get
            {
                if (this.m_LocallyCachedValue == null)
                {
                    if (this.Record == null)
                    {
                        this.m_LocallyCachedValue = new BusinessClass(
                            this.Database, this.PropertyId);
                    }
                    else
                    {
                        this.m_LocallyCachedValue = new BusinessClass(
                            this.Database, this.Record.ForeignKeyName);
                    }
                }
                return this.m_LocallyCachedValue;
            }
        }

    I am still learning C#, and I'm trying to figure out the best way to write this pattern once and add methods to each business layer class that follow it, with the proper types and variable names substituted. BusinessClass is a type name that must be substituted, and PropertyName, PropertyId, ForeignKeyName, and m_LocallyCachedValue are all variables that should be substituted for. Are attributes usable here? Do I need reflection? How do I write the skeleton I provided in one place and then just write a line or two containing the substitution parameters and get the pattern to propagate itself?

    EDIT: Modified my misleading title. I am hoping to find a solution that doesn't involve code generation or copy/paste techniques, and rather to be able to write the skeleton of the code once, in a base class in some form, and have it be "instantiated" into lots of subclasses as the accessor for various properties.

    EDIT: Here is my solution, as suggested but left unimplemented by the chosen answerer.

        // I'll write many of these...
        public BusinessClass PropertyName
        {
            get
            {
                return GetSingleRelation(
                    ref this.m_LocallyCachedValue, this.PropertyId, "ForeignKeyName");
            }
        }

        // That all call this.
        public TBusinessClass GetSingleRelation<TBusinessClass>(
            ref TBusinessClass cachedField, int fieldId, string contextFieldName)
        {
            if (cachedField == null)
            {
                if (this.Record == null)
                {
                    ConstructorInfo ci = typeof(TBusinessClass).GetConstructor(
                        new Type[] { this.Database.GetType(), typeof(int) });
                    cachedField = (TBusinessClass)ci.Invoke(
                        new object[] { this.Database, fieldId });
                }
                else
                {
                    var obj = this.Record.GetType().GetProperty(contextFieldName).GetValue(
                        this.Record, null);
                    ConstructorInfo ci = typeof(TBusinessClass).GetConstructor(
                        new Type[] { this.Database.GetType(), obj.GetType() });
                    cachedField = (TBusinessClass)ci.Invoke(
                        new object[] { this.Database, obj });
                }
            }
            return cachedField;
        }

    Read the article

  • I am getting an error like mysql_connect(): access denied for 'system'@'localhost' (using password: NO)

    - by user309381
        <?php
        class MySQLDatabase
        {
            public $connection;

            function _construct()
            {
                $this->open_connection();
            }

            public function open_connection()
            {
                $this->connection = mysql_connect(DB_SERVER, DB_USER, DB_PASS);
                if (!$this->connection) {
                    die("Database Connection Failed" . mysql_error());
                } else {
                    $db_select = mysql_select_db(DB_NAME, $this->connection);
                    if (!$db_select) {
                        die("Database Selection Failed" . mysql_error());
                    }
                }
            }

            public function close_connection()
            {
                if (isset($this->connection)) {
                    mysql_close($this->connection);
                    unset($this->connection);
                }
            }

            public function query(/*$sql*/)
            {
                $sql = "SELECT * FROM users WHERE id = 1";
                $result = mysql_query($sql);
                $this->confirm_query($result);
                //return $result;
                while ($found_user = mysql_fetch_assoc($result)) {
                    echo $found_user['username'];
                }
            }

            private function confirm_query($result)
            {
                if (!$result) {
                    die("The Query has problem" . mysql_error());
                }
            }
        }

        $database = new MySQLDatabase();
        $database->open_connection();
        $database->query();
        $database->close_connection();
        ?>

    Read the article

  • Let multiple highcharts charts appear automatically from mysql data

    - by martini1993
    I have the following problem. I want to make multiple Highcharts web charts appear automatically based on the data from the database. Let's say we have the following database:

        | Year | Month | ID | Name User   | Wins | Losses |
        |------|-------|----|-------------|------|--------|
        | 2013 | 1     | 21 | Tony Stark  | 3    | 12     |
        | 2013 | 1     | 52 | Bruce Wayne | 5    | 4      |
        | 2013 | 1     | 76 | Clark Kent  | 9    | 5      |

    (This database is an example; there are a lot more rows in the real database.) And I have the following query:

        SELECT a.year AS year1, a.month AS month1, a.id AS id,
               a.name AS nameuser, a.wins AS wins, a.losses AS losses
        FROM Sales a
        WHERE a.month = 1 AND a.year = YEAR(NOW())

    With this, it is very easy to hardcode a chart with Highcharts. But what I want is for there to be a web chart per user. So instead of a single web chart with all the users in it, I want multiple charts next to each other based on the data from the database. So instead of this: http://jsfiddle.net/CWSb6/ I want this (but then next to each other): http://jsfiddle.net/DReMD/

    It has to be generated automatically with PHP and MySQL. So if there is a new user starting this month, and the new user is saved in the database, the page automatically displays the new user with the related web chart. I find this very hard to accomplish and I need some help getting to the right direction for the solution. Many thanks in advance! (Sorry for my bad English.)
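
    A minimal sketch of the query side of this: fetch the current month's rows ordered by user, so the PHP layer can emit one chart container per row. Column names follow the question's own query; the ordering is the only addition:

        -- One row per user; the rendering loop emits one chart per row
        SELECT a.name, a.wins, a.losses
        FROM Sales a
        WHERE a.month = 1 AND a.year = YEAR(NOW())
        ORDER BY a.name;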

    Read the article

  • MySQL FULLTEXT aggravation

    - by southof40
    Hi, I'm having problems with case sensitivity in MySQL FULLTEXT searches. I've just followed the FULLTEXT example in the MySQL documentation at http://dev.mysql.com/doc/refman/5.1/en/fulltext-boolean.html. I'll post it here for ease of reference:

        CREATE TABLE articles (
            id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
            title VARCHAR(200),
            body TEXT,
            FULLTEXT (title, body)
        );

        INSERT INTO articles (title, body) VALUES
            ('MySQL Tutorial', 'DBMS stands for DataBase ...'),
            ('How To Use MySQL Well', 'After you went through a ...'),
            ('Optimizing MySQL', 'In this tutorial we will show ...'),
            ('1001 MySQL Tricks', '1. Never run mysqld as root. 2. ...'),
            ('MySQL vs. YourSQL', 'In the following database comparison ...'),
            ('MySQL Security', 'When configured properly, MySQL ...');

        SELECT * FROM articles
        WHERE MATCH (title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    My problem is that the example shows that SELECT returning the first and fifth rows ('..DataBase..' and '..database..'), but I only get one row ('database')! The example doesn't state what collation the table had, but I have ended up with latin1_general_cs on the title and body columns of my example table. My version of MySQL is 5.1.39-log and the connection collation is utf8_unicode_ci. I'd be really grateful if someone could suggest why my experience differs from the example in the manual! I'd be grateful for any advice.
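
    The _cs suffix in latin1_general_cs marks a case-sensitive collation, which would explain a FULLTEXT match on 'database' skipping 'DataBase'. A sketch of switching the searched columns to a case-insensitive collation (an experiment, not a confirmed fix; back up first, as this rebuilds the index):

        -- Hypothetical fix: move the searched columns to a
        -- case-insensitive collation, then let MySQL rebuild the FULLTEXT index
        ALTER TABLE articles
            MODIFY title VARCHAR(200) CHARACTER SET latin1 COLLATE latin1_swedish_ci,
            MODIFY body  TEXT         CHARACTER SET latin1 COLLATE latin1_swedish_ci;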

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4 GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend.

    Transitioning to the hi-lo generator seems to be the best way forward, but I can't find much detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update: NHibernate's increment identifier generator works entirely in memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes.

    For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit-of-work pattern. The hi-lo algorithm sits in between these: generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit-of-work pattern.
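
    For reference, a hand-scripted version of the table the hi-lo generator reads might look roughly like this. The names hibernate_unique_key and next_hi are NHibernate's defaults; the divisor of 32 is an assumed max_lo setting, and the exact seeding arithmetic depends on your configured max_lo, so treat this as a sketch to verify rather than a recipe:

        -- One-row table the hi-lo generator increments on each "hi" fetch
        CREATE TABLE hibernate_unique_key (next_hi INT NOT NULL);

        -- Seed above the largest key already in use (example for one table,
        -- assuming max_lo = 32)
        INSERT INTO hibernate_unique_key (next_hi)
        SELECT (MAX(ID) / 32) + 1 FROM SomeTable;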

    Read the article

  • Asp.net renders string with wrong encoding, but PHP doesn't (MySQL)

    - by citronas
    I took over an old PHP application with MySQL as the database. Inside the database, there are tables including content with localized strings (therefore containing special characters). Currently there is a PHP application accessing that database. My job is to create an ASP.NET (C# code-behind) application that accesses those strings as well.

    That works, except where encoding is concerned. When I access these strings, I get a kind of encoding problem with strings like 'Ändern' and 'Prüfzeichen', but only in the ASP.NET application. The PHP app sets UTF-8 as the charset and the strings are perfectly rendered; in the ASP.NET application it's gibberish, regardless of the page encoding. In the MySQL database, the charset for the table 'translations' is set to latin1 (cp1252 West European) and the collation to latin1_swedish_ci.

    I can't seem to figure out what PHP apparently does that ASP.NET does not. I traced the PHP code and could not find any sign of special encoding while getting a string from the database. The question is: how can I ensure correct encoding inside the ASP.NET application without modifying the database, given that big changes to the PHP code are not possible? Does anybody have a clue?
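
    One thing the PHP side may be doing implicitly is declaring the connection character set. In MySQL this is an ordinary statement, so as an experiment (an assumption, not a confirmed fix) the ASP.NET side could issue it right after opening its connection:

        -- Declare the connection character set so the server converts
        -- the latin1 column data correctly for this client
        SET NAMES 'latin1';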

    Read the article
