Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

Page 696/1252

  • ALTER dilemma: how to use ALTER to set the primary key and other attributes

    - by Rachel
    I have the following table in the database and I need to alter it to the schema shown below. Initially I was dropping the current database and creating a new one with CREATE, but I am not supposed to do that; I have to use ALTER, and I am not sure how to use ALTER to add the primary key and the other constraints. Any suggestions? Current: CREATE TABLE `details` ( `KEY` varchar(255) NOT NULL, `ID` bigint(20) NOT NULL, `CODE` varchar(255) NOT NULL, `C_ID` bigint(20) NOT NULL, `C_CODE` varchar(64) NOT NULL, `CCODE` varchar(255) NOT NULL, `TCODE` varchar(255) NOT NULL, `LCODE` varchar(255) NOT NULL, `CAMCODE` varchar(255) NOT NULL, `OFCODE` varchar(255) NOT NULL, `OFNAME` varchar(255) NOT NULL, `PRIORITY` bigint(20) NOT NULL, `STDATE` datetime NOT NULL, `ENDATE` datetime NOT NULL, `INT` varchar(255) NOT NULL, `PHONE` varchar(255) NOT NULL, `TV` varchar(255) NOT NULL, `MTV` varchar(255) NOT NULL, `TYPE` varchar(255) NOT NULL, `CREATED` datetime NOT NULL, `MAIN` varchar(255) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1; Desired: CREATE TABLE `details` ( `id` bigint(20) NOT NULL, `code` varchar(255) NOT NULL, `cid` bigint(20) NOT NULL, `ccode` varchar(64) NOT NULL, `c_code` varchar(255) NOT NULL, `tcode` varchar(255) NOT NULL, `lcode` varchar(255) NOT NULL, `camcode` varchar(255) NOT NULL, `ofcode` varchar(255) NOT NULL, `ofname` varchar(255) NOT NULL, `priority` bigint(20) NOT NULL, `stdate` datetime NOT NULL, `enddate` datetime NOT NULL, `list` varchar(255) NOT NULL, `name` varchar(255) NOT NULL, `created` datetime NOT NULL, `date` datetime NOT NULL, `ofshn` int(20) NOT NULL, `ofcl` int(20) NOT NULL, `ofr` int(20) NOT NULL, PRIMARY KEY (`code`,`ccode`,`list`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; Thanks!
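
    A sketch of the kind of ALTER TABLE statements this change involves, assuming MySQL and the column names above; only a few representative clauses are shown, and the old-to-new column mapping (for example INT becoming list) is an assumption for illustration, not taken from the question:

      -- Sketch only: each renamed, dropped, or added column needs its own clause.
      ALTER TABLE `details`
        DROP COLUMN `KEY`,
        CHANGE COLUMN `ID` `id` bigint(20) NOT NULL,
        CHANGE COLUMN `CODE` `code` varchar(255) NOT NULL,
        CHANGE COLUMN `CCODE` `ccode` varchar(64) NOT NULL,
        CHANGE COLUMN `INT` `list` varchar(255) NOT NULL,
        ADD COLUMN `ofshn` int(20) NOT NULL;

      -- Add the composite primary key once the renamed columns exist.
      ALTER TABLE `details` ADD PRIMARY KEY (`code`, `ccode`, `list`);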

    Read the article

  • PHP returns invalid MySQL resource

    - by DeadMG
    $LDATE = '#' . $_REQUEST['LDateDay'] . '/' . $_REQUEST['LDateMonth'] . '/' . $_REQUEST['LDateYear'] . '#'; $RDATE = '#' . $_REQUEST['RDateDay'] . '/' . $_REQUEST['RDateMonth'] . '/' . $_REQUEST['RDateYear'] . '#'; include("../../sql.php"); $myconn2 = mysql_connect(/*removed*/, $username, $password); mysql_select_db(/*removed*/, $myconn2); $LSQLRequest = "SELECT * FROM flight WHERE DepartureDate = ".$LDATE; $LFlights = mysql_query($LSQLRequest, $myconn2); $RSQLRequest = "SELECT * FROM flight WHERE DepartureDate = ".$RDATE; $RFlights = mysql_query($RSQLRequest, $myconn2); Assuming that all the $_REQUESTs are valid numerical values for their appropriate fields in the day/month/year field, how can LFlights and RFlights be invalid? When I polled the whole database I got hundreds of results so I know that the database and connection data is fine, and the field DepartureDate exists too.
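
    One likely culprit is the #...# date delimiters, which are MS Access syntax; MySQL expects quoted date literals in YYYY-MM-DD order. A sketch of the form the comparison would need to take (the date value here is hypothetical):

      -- MySQL date literals are quoted strings in ISO order (YYYY-MM-DD).
      -- In MySQL, # begins a comment, so a #day/month/year# literal truncates
      -- the statement and mysql_query returns FALSE instead of a resource.
      SELECT *
      FROM flight
      WHERE DepartureDate = '2010-06-14';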

    Read the article

  • Special characters stripped by mySQL/PHP JSON

    - by Will Gill
    Hi, I have a simple PHP script to extract data from a mySQL database and encode it as JSON. The problem is that special characters (for example German ä or ß characters) are stripped from the JSON response. Everything after the first special character for any single field is just stripped. The fields are set to utf8_bin, and in phpMyAdmin the characters display correctly. The PHP script looks like this: <?php header("Content-type: application/json; charset=utf-8"); $con = mysql_connect('database', 'username', 'password'); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("sql01_5789willgil", $con); $sql="SELECT * FROM weightevent"; $result = mysql_query($sql); $row = mysql_fetch_array($result); $events = array(); while($row = mysql_fetch_array($result)) { $eventid = $row['eventid']; $userid = $row['userid']; $weight = $row['weight']; $sins = $row['sins']; $gooddeeds = $row['gooddeeds']; $date = $row['date']; $event = array("eventid"=>$eventid, "userid"=>$userid, "weight"=>$weight, "sins"=>$sins, "gooddeeds"=>$gooddeeds, "date"=>$date); array_push($events, $event); } $myJSON = json_encode($events); echo $myJSON; mysql_close($con); ?> Sample output: [{"eventid":"2","userid":"1","weight":"70.1","sins":"Weihnachtspl","gooddeeds":"situps! lots and lots of situps!","date":"2011-01-02"},{"eventid":"3","userid":"2","weight":"69.9","sins":"A second helping of pasta...","gooddeeds":"I ate lots of salad","date":"2011-01-01"}] -- in the first record the value for field 'sins' should be "Weihnachtsplätzchen". thanks very much!
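
    A common cause is the connection character set defaulting to latin1 while the column data is UTF-8; a sketch of the statement that would be issued right after selecting the database, before the SELECT (this is the statement only, not the PHP wiring):

      -- Align the client, connection, and results character sets with the
      -- utf8 data stored in the table; the default is often latin1.
      SET NAMES 'utf8';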

    Read the article

  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site. We are experiencing UTF-8 corruption issues and are looking to figure out how to resolve them. The system is run using symfony 1.0 with Propel. On our development server, we are running PHP 5.2.0 and MySQL 5.0.32. We do not experience corrupted UTF-8 characters there. On our live site, PHP 5.2.10 and MySQL 5.0.81 is running. On that server, certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters are showing up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption: Uncorrupted: ô´ Corrupted: ô? Uncorrupted: S Corrupted: ? We are currently using the following techniques on both development and live servers: Executing the following queries prior to execution of any other queries: SET NAMES 'utf8' COLLATE 'utf8_unicode_ci' SET CHARSET 'utf8' Setting the <meta> Content-Type value to: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> Adding the following to our .htaccess file: AddDefaultCharset utf-8 Using mb_* (multibyte) PHP functions where necessary. Being sure to set database columns to use utf8_unicode_ci collation. These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection) but this does not help either. I have found some evidence that newer versions of PHP and MySQL are mishandling UTF-8 character encodings.

    Read the article

  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi, we are using an extractor application that exports data from the database to CSV files. Based on a condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL because the data has to be extracted from more than one table, so to satisfy the UNION ALL we use nulls to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in the table projection (i.e. a new column added, an existing column modified, a column dropped) we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require changes in the code? My concern is the condition that decides which table to query; for example, if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more wrinkle: some columns need to be excluded, and these columns are different for each table. I am trying to solve the problem only for dynamically generating the column list, but my manager told me to design a solution at the conceptual level rather than just a fix. This is a very big system with providers and consumers constantly loading and consuming data, so he wants a solution that is general. So what is the best way to store the condition, table name, and excluded columns? One way is storing them in the database. Are there any other ways, and if so, which is best? I have to give at least a couple of ideas before finalizing. Thanks,
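
    For the Oracle side of this, a sketch of deriving the column list at run time from the data dictionary while skipping excluded columns; the APP_OWNER schema and the EXTRACT_EXCLUDED_COLUMNS configuration table are hypothetical names used only for illustration:

      -- Columns of the chosen table, minus the ones registered as excluded.
      SELECT c.column_name
      FROM   all_tab_columns c
      WHERE  c.owner      = 'APP_OWNER'
      AND    c.table_name = 'TABLEX'
      AND    c.column_name NOT IN (
               SELECT e.column_name
               FROM   extract_excluded_columns e   -- hypothetical config table
               WHERE  e.table_name = 'TABLEX'
             )
      ORDER BY c.column_id;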

    Read the article

  • SQL 2 INNER JOINS with 3 tables

    - by Jelmer Holtes
    I have a question about an SQL query. I'm building a prototype webshop in ASP.NET with Visual Studio, and I'm looking for a way to display my products. I've built a database in MS Access consisting of multiple tables. The tables that matter for my question are: Product, Productfoto, and Foto. I need three pieces of data: the product title, the price, and the image. The product title and the price are in the Product table; the images are in the Foto table. Because a product can have more than one picture, there is an N-M relation between them, so I had to split it up, which I did with the Productfoto table. The connection between them is: product.artikelnummer -> productfoto.artikelnummer, productfoto.foto_id -> foto.foto_id. From there I can read the filename (in the database: foto.bestandnaam). I've created the first inner join and tested it in Access; this works: SELECT titel, prijs, foto_id FROM Product INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikelnummer But I need another INNER JOIN; how can I create that? I guess something like this (this one gives me an error): SELECT titel, prijs, bestandnaam FROM Product (( INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikkelnummer ) INNER JOIN foto ON productfoto.foto_id = foto.foto_id) Can anyone help me?
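
    A sketch of the two-join form, using the table and column names from the question; in Access the first join generally has to be wrapped in parentheses before the second join is applied:

      -- Product -> Productfoto -> Foto, returning title, price, and filename.
      SELECT Product.titel, Product.prijs, Foto.bestandnaam
      FROM (Product
            INNER JOIN Productfoto ON Product.artikelnummer = Productfoto.artikelnummer)
            INNER JOIN Foto ON Productfoto.foto_id = Foto.foto_id;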

    Read the article

  • Export the DataGrid data to text in ASP.NET/C#

    - by SRIRAM
    Problem: the compiler says there is no assembly reference/namespace for Database. Database db = DatabaseFactory.CreateDatabase(); DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles"); DataSet ds = db.ExecuteDataSet(selectCommandWrapper); StringBuilder str = new StringBuilder(); for(int i=0;i<=ds.Tables[0].Rows.Count - 1; i++) { for(int j=0;j<=ds.Tables[0].Columns.Count - 1; j++) { str.Append(ds.Tables[0].Rows[i][j].ToString()); } str.Append("<BR>"); } Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=FileName.txt"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "application/vnd.text"; System.IO.StringWriter stringWrite = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite); Response.Write(str.ToString()); Response.End();

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger (the assumption being that all DML run through stored procedures will have @@NESTLEVEL greater than 1). That is, the body of the trigger code looks something like: IF @@NESTLEVEL = 1 -- implies call is direct sql so generate history from here BEGIN ... insert into audit table This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
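
    One approach that is sometimes suggested instead of @@NESTLEVEL is to have every stored procedure mark its session with CONTEXT_INFO and have the trigger test for that marker; a sketch follows (the marker value and the audit table/column names are arbitrary choices for illustration, and dynamic SQL executed from inside a procedure would still carry the marker):

      -- In each stored procedure, before its DML:
      SET CONTEXT_INFO 0x01;

      -- In the trigger body: audit only when the marker is absent,
      -- i.e. the DML did not come through a marked procedure.
      IF COALESCE(SUBSTRING(CONTEXT_INFO(), 1, 1), 0x00) <> 0x01
      BEGIN
          INSERT INTO dbo.Customer_Audit (CustomerID, ChangedAt)  -- hypothetical audit table
          SELECT CustomerID, GETDATE() FROM inserted;
      END

      -- At the end of each procedure, clear the marker:
      SET CONTEXT_INFO 0x00;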

    Read the article

  • No buffer space available (maximum connections reached?) from the Postgres EDB driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows: com.edb.util.PSQLException: The connection attempt failed. at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189) at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64) at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161) at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30) at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24) at com.edb.Driver.makeConnection(Driver.java:391) at com.edb.Driver.connect(Driver.java:266) at java.sql.DriverManager.getConnection(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) ... 12 more Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(Unknown Source) at java.net.PlainSocketImpl.connectToAddress(Unknown Source) at java.net.PlainSocketImpl.connect(Unknown Source) at java.net.SocksSocketImpl.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.<init>(Unknown Source) at java.net.Socket.<init>(Unknown Source) at com.edb.core.PGStream.<init>(PGStream.java:70) at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115) ... 20 more When the error occurred we were not able to connect to the internet or the DB and had to reboot the system. The error occurred again after 3 days at the same code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. the max limit had not been reached. Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get the related data. We are using the following system and applications: Windows 2003 Server SP2, Java 1.6, Postgres Plus Advanced Server 8.4 database, and the edb-jdbc14.jar driver for connecting to the DB from Java. We have used the default configuration of the Postgres DB except for increasing the connection limit from 100 to 120. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?

    Read the article

  • LINQ to SQL XML mapping to a generic type

    - by Manuel Navarro
    I'm trying to use an external XML file to map the output from a stored procedure into an instance of a class. The problem is that my class is of a generic type: public class MyValue<T> { public T Value { get; set; } } Searching through a lot of blogs and articles I've managed to get this: <?xml version="1.0" encoding="utf-8" ?> <Database Name="" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007"> <Table Name="MyValue" Member="MyNamespace.MyValue`1" > <Type Name="MyNamespace.MyValue`1"> <Column Name="Category" Member="Value" DbType="VarChar(100)" /> </Type> </Table> <Function Method="GetResourceCategories" Name="myprefix_GetResourceCategories" > <ElementType Name="MyNamespace.MyValue`1"/> </Function> </Database> The MyNamespace.MyValue`1 trick works fine, and the class is recognized. I expect four rows from the stored procedure, and I'm getting four MyValue<string> instances, but the big problem is that the property Value for all four instances is null. The property is not getting mapped and I don't really get why. Maybe worth noting that the property Value is generic, and that when the mapping is done using attributes it works perfectly. Anyone have a clue? BTW the method GetResourceCategories: public ISingleResult<MyValue<string>> GetResourceCategories() { IExecuteResult result = this.ExecuteMethodCall( this, (MethodInfo)MethodInfo.GetCurrentMethod()); return (ISingleResult<MyValue<string>>)result.ReturnValue; }

    Read the article

  • Problems sorting by date

    - by Nathan
    I'm attempting to only display data added to my database 24 hours ago or less. However, for some reason, the code I've written isn't working, and both entries in my database, one from 1 hour ago, one from 2 days ago, show up. Is there something wrong with my equation? Thanks! public void UpdateValues() { double TotalCost = 0; double TotalEarned = 0; double TotalProfit = 0; double TotalHST = 0; for (int i = 0; i <= Program.TransactionList.Count - 1; i++) { DateTime Today = DateTime.Now; DateTime Jan2013 = DateTime.Parse("01-01-2013"); //Hours since Jan12013 int TodayHoursSince2013 = Convert.ToInt32(Math.Round(Today.Subtract(Jan2013).TotalHours)); //7 int ItemHoursSince2013 = Program.TransactionList[i].HoursSince2013; //Equals 7176, and 7130 if (ItemHoursSince2013 - TodayHoursSince2013 <= 24) { TotalCost += Program.TransactionList[i].TotalCost; TotalEarned += Program.TransactionList[i].TotalEarned; TotalProfit += Program.TransactionList[i].TotalEarned - Program.TransactionList[i].TotalCost; TotalHST += Program.TransactionList[i].TotalHST; } } label6.Text = "$" + String.Format("{0:0.00}", TotalCost); label7.Text = "$" + String.Format("{0:0.00}", TotalEarned); label8.Text = "$" + String.Format("{0:0.00}", TotalProfit); label10.Text = "$" + String.Format("{0:0.00}", TotalHST); }

    Read the article

  • How to determine one to many column name from entity type

    - by snicker
    I need a way to determine the name of the column used by NHibernate to join one-to-many collections from the collected entity's type. I need to be able to determine this at runtime. Here is an example: I have some entities: namespace Entities { public class Stable { public virtual int Id {get; set;} public virtual string StableName {get; set;} public virtual IList<Pony> Ponies { get; set; } } public class Dude { public virtual int Id { get; set; } public virtual string DudesName { get; set; } public virtual IList<Pony> PoniesThatBelongToDude { get; set; } } public class Pony { public virtual int Id {get; set;} public virtual string Name {get; set;} public virtual string Color { get; set; } } } I am using NHibernate to generate the database schema, which comes out looking like this: create table "Stable" (Id integer, StableName TEXT, primary key (Id)) create table "Dude" (Id integer, DudesName TEXT, primary key (Id)) create table "Pony" (Id integer, Name TEXT, Color TEXT, Stable_id INTEGER, Dude_id INTEGER, primary key (Id)) Given that I have a Pony entity in my code, I need to be able to find out: A. Does Pony even belong to a collection in the mapping? B. If it does, what are the column names in the database table that pertain to collections In the above instance, I would like to see that Pony has two collection columns, Stable_id and Dude_id.

    Read the article

  • entity framework vNext wish list

    - by Fred Yang
    I have been studying EF4 intensively and use it in my project. I do feel the improvement it has over version 1, but there are some things I cannot get around easily. Here is a list of what I want to be better in EF vNext. The model designer should allow multiple views of the same model, so that I don't need to cram all my entities into a single view. It should respect the user's manual edits of the edmx. Currently, some database view objects simply cannot be imported into the model because the designer "smartly" thinks the view does not have a primary key, so I have to manually edit the edmx to correct the designer's behavior. But on the next "update from database" task, the designer reverts my customization. For now, I simply fall back to editing the edmx file entirely by hand, or I have to use a compare tool to keep the new update, roll back, and put the new update into my old edmx file manually. The designer should be improved to allow both default behavior and manual user control; I want the option not to let the designer refresh the changes to imported objects. It should also support user-defined table functions. LINQ is about composability, and stored procs do not support composability; I wish I could use user-defined table functions, which do. What are your wishes for EF vNext?

    Read the article

  • Update SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there, I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering: What database features can we leverage with 2008 that we can't now? What new TSQL features can we look forward to using? What performance benefits can we expect to see? What else will make management go for it? And the converse: What problems can we expect to encounter? What other problems have people found when migrating? Why fix something that isn't (technically) broken? We work in a Java shop, so any .NET / CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio however. Some background: Our main database machine is a 32bit Dell Intel Xeon MP CPU 2.0GHz, 40MB of RAM with Physical Address Extension running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases in total are under a TB with some having more than 200 tables. But they are busy and during busy times we see 60-80% CPU utilisation. Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!
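
    On the T-SQL side, one concrete example of what 2008 adds over 2000 is the MERGE statement, which folds insert/update logic into a single pass; a sketch with hypothetical table and column names:

      -- MERGE is new in SQL Server 2008 (not available in 2000 or 2005).
      MERGE dbo.CustomerTarget AS t
      USING dbo.CustomerStaging AS s
          ON t.CustomerID = s.CustomerID
      WHEN MATCHED THEN
          UPDATE SET t.Name = s.Name
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (CustomerID, Name) VALUES (s.CustomerID, s.Name);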

    Read the article

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that baseball database there is a master table which lists every player that has ever played. There is also a batting table, which tracks every players' batting statistics. I created a view to join those two together; hence the masterplusbatting table. CREATE TABLE `Master` ( `lahmanID` int(9) NOT NULL auto_increment, `playerID` varchar(10) NOT NULL default '', `nameFirst` varchar(50) default NULL, `nameLast` varchar(50) NOT NULL default '', PRIMARY KEY (`lahmanID`), KEY `playerID` (`playerID`), ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1; CREATE TABLE `Batting` ( `playerID` varchar(9) NOT NULL default '', `yearID` smallint(4) unsigned NOT NULL default '0', `teamID` char(3) NOT NULL default '', `lgID` char(2) NOT NULL default '', `HR` smallint(3) unsigned default NULL, PRIMARY KEY (`playerID`,`yearID`,`stint`), KEY `playerID` (`playerID`), KEY `team` (`teamID`,`yearID`,`lgID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following.... select f.yearID, f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried. select f.yearID, truncate(f.yearid/10,0) as decade,f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS group by decade You can see that I truncated the yearID in order to get 187, 188, 189 etc instead of 1897, 1885,. I then grouped by the decade, thinking that it would give me the highest per decade, but it is not returning the correct values. For example, it's giving me Adrian Beltre with 48 HR's in 2004 but everyone knows that Barry Bonds hit 73 HR in 2001. Can anyone give me some pointers?
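
    A sketch of restructuring the query so the maximum is computed per decade rather than per year and then joined back, using the same masterplusbatting view as above:

      -- Compute the per-decade maximum first, then join back to find who hit it.
      SELECT f.yearID, x.decade, f.nameFirst, f.nameLast, f.HR
      FROM (
          SELECT TRUNCATE(yearID / 10, 0) AS decade, MAX(HR) AS HOMERS
          FROM masterplusbatting
          GROUP BY TRUNCATE(yearID / 10, 0)
      ) AS x
      INNER JOIN masterplusbatting AS f
          ON TRUNCATE(f.yearID / 10, 0) = x.decade
         AND f.HR = x.HOMERS;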

    Read the article

  • How to Store and Retrieve Images Using MsSQL (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into an MsSQL database. I'll try to break this down as best as I can: What data type should I be using to store image files (jpeg/png/gif/etc)? Right now my table is using the image data type, but I am curious if varbinary would be a better option. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form with PHP handling the input data and placing it into the table? How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture. Would I have to have a header(Content type: image/jpeg)? I have no problem doing any of these things with MySQL, but the MsSQL environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated. Thank you very much for your responses!
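
    A sketch of the T-SQL side, assuming a hypothetical Pictures table: varbinary(max) is the type recommended over the deprecated image type for new work, and OPENROWSET(BULK ...) can pull a file that sits on the server's disk straight into the column (the path shown is hypothetical):

      -- varbinary(max) rather than the deprecated image type.
      CREATE TABLE dbo.Pictures (          -- hypothetical table for illustration
          PictureID  int IDENTITY(1,1) PRIMARY KEY,
          FileName   nvarchar(260) NOT NULL,
          ImageData  varbinary(max) NOT NULL
      );

      -- Load a file from the server's file system into the table.
      INSERT INTO dbo.Pictures (FileName, ImageData)
      SELECT N'photo.jpg', BulkColumn
      FROM   OPENROWSET(BULK N'C:\temp\photo.jpg', SINGLE_BLOB) AS img;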

    Read the article

  • Moving front page (cursebird or foursquare)

    - by Dan Samuels
    OK, I'll be honest: I have a good amount of experience with PHP/MySQL, I've just started learning jQuery, and I've done very little with Ajax, so using the terms Ajax/jQuery interchangeably is a bit confusing to me. Anyway, as the title suggests, I have a website with 5 items, and I want them to move (meaning, if a more recent one is entered, remove the last item and put the new one on top); they are the 5 most recent items in the database table. I've coded jQuery as a test so it fades out the last one, the whole list moves down, makes room at the top, and fades in a new one. However, it's a test and has zero interaction with the database; the item that fades in is just in a hidden div. So the jQuery part is taken care of, but I'm unsure how to go about the rest. I was thinking of maybe having Ajax check an off-page URL that has those 5 items in raw format, and refresh if they change? I'm not looking for a "plz code 4 me" answer, just the concept of how it would work, or some links to get off to the right start. Edit: also, the 5 items are ranked, so if I click item 3 I need it to move above item 2 without a refresh, which I assume causes a whole other issue.

    Read the article

  • What is the preferred method of building extremely lightweight business objects / DAL now that I have

    - by Seth Spearman
    Hello, I have completed a simple database for a project. Only 6 tables. Of the 6, one is a "lookup" table. There is one "master" table that is the driver for the system; it is referenced as a foreign key by the other four tables. Given that this step is completed, what is the FASTEST, EASIEST way to create POCOs/BizObjects that can load the data and the child data? Here are my caveats. *I don't want to spend more than 30-60 minutes learning how. *There is very little biz logic needed in the POCOs; they will pretty much just load data, and don't even really need to write data back. *I already know CSLA (up to version 3), but I feel that is overkill for this little project. *Nevertheless, I would love it if the ROOT objects could have collection classes that contain the CHILD objects, as in CSLA... but again, without using CSLA. *Please give the answer for .NET 3.5, but also note if I were restricted to only .NET 2.0. *Ideally I could just point a tool at the database and the POCOs would be generated. *FREE. Just curious what you guys use for this kind of scenario. I understand that this question is subjective, but I want to hear a variety of answers. Seth

    Read the article

  • JPA 2.0 Provider Hibernate

    - by Rooh
    I have a very strange problem. We are using JPA 2.0 with Hibernate annotations; the database is generated through JPA (DDL generation is set to true) and MySQL is the database. I will provide some reference classes and then my problem. @MappedSuperclass public abstract class Common implements serializable{ @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "id", updatable = false) private Long id; @ManyToOne @JoinColumn private Address address; //with all getter and setters //as well equal and hashCode } @Entity public class Parent extends Common{ private String name; @OneToMany(cascade = {CascadeType.MERGE,CascadeType.PERSIST}, mappedBy = "parent") private List<Child> child; //setters and rest of class } @Entity public class Child extends Common{ //some properties with getter/setters } @Entity public class Address implements Serializable{ @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "id", updatable = false) private Long id; private String street; //rest of class with get/setter } As you can see in the code, the Parent and Child classes both extend Common, so both have an address property and an id. The problem occurs when I change the address reference in the Parent class: the same change is reflected in all Child objects in the list. And if I change the address reference in the Child class, then on merge it changes the address reference of the Parent as well. I am not able to figure out whether this is a problem with JPA or Hibernate.

    Read the article

  • Applying fine-grained security to an existing application

    - by Mark
    I've inherited a reasonably large and complex ASP.NET MVC3 web application using EF Code First on SQL Server. It uses ASP.NET Membership roles with database authentication. The controller actions are secured with attributes derived from AuthorizeAttribute that map roles to actions. There are extension methods for the finer points, such as showing a particular widget to particular roles. This is works great and I have a good understanding of the current security model. I've been asked to provide finer grained security at the data level. For example a 'Customer' user can only see data (throughout the database) associated with themselves and not other Customers. The problem is that 'Customer' is only 1 of 5 different types with their own specific restrictions (each of the 9 roles is one of these 5 types). The best thing I can think of is to go through all the data repositories and extend each and every LINQ statements/query with a filter for every user type. Even if I had time for that it doesn't seem like the most elegant way. Any suggestions? I really don't know where to start with this so anything could be helpful. Many thanks.

    Read the article

  • Multi threading in WCF RIA Services

    - by synergetic
    I use WCF RIA Services to update a customer database. In the domain service: public void UpdateCustomer(Customer customer) { this.ObjectContext.Customers.AttachAsModified(customer); syncCustomer(customer); } After the update, a database trigger fires and, depending on the columns updated, may insert a new record into the CustomerChange table. The syncCustomer(customer) method checks for a new record in the CustomerChange table and, if found, creates a text file containing the customer information and forwards that file to an external system for import. This synchronization may take time, so I wanted to execute it in a different thread. So: private void syncCustomer(Customer customer) { this.ObjectContext.SaveChanges(); new Thread(() => syncCustomerInfo(customer.CustomerID)) { IsBackground = true }.Start(); } private void syncCustomerInfo(int customerID) { //Thread.Sleep(2000); //does real job here ... ... } The problem is that in most cases the syncCustomerInfo method cannot find any new CustomerChange record even when it is definitely there. If I force the thread to sleep, then it finds the new record. I also looked at Entity Framework events, but the only event provided by the object context is SavingChanges, which occurs before changes are saved. Please suggest what else to try.

    Read the article

  • C# web service, MySql encoding problem

    - by Boban
    I use a C# web service to insert, delete, and get data from a MySQL database. The problem is that some of the data is in Macedonian (Cyrillic). When I insert directly into the database, it inserts OK. For example: "???" is "???". When I insert through the service, it's not. For example: "???" is "???". When I try to get data through the service, it gets it OK. What's the problem with the inserting? Here is part of my code for inserting: MySqlConnection connection = new MySqlConnection(MyConString); MySqlCommand command = connection.CreateCommand(); command.CommandText = "INSERT INTO user (id_user, name VALUES (NULL, ?name);"; command.Parameters.Add("?name", MySqlDbType.VarChar).Value = name; connection.Open(); command.ExecuteReader(); connection.Close(); return thisrow; Thank you in advance!

    Read the article

  • Why does a change of Session State provider lead to an ASPx page yielding garbage?

    - by Rory Becker
    I have an ASP.NET web app which has worked very well up until now. I was recently asked to explore ways of making it scale better. I found that separation of the database and the web app would help. Further, I was told that if I changed my session state mechanism to SQL Server, I would be able to duplicate the web stack to several machines which could each call back to the state server, allowing the load to be distributed better. This sounds logical. So I created an ASPState database using ASPNet_RegSQL.exe, as detailed in many locations across the web, and changed the web.config of my app from: <sessionState mode="InProc" cookieless="false" timeout="20" /> To: <sessionState mode="SQLServer" sqlConnectionString="Server=SomeSQLServer;user=SomeUser;password=SomePassword" cookieless="false" timeout="20" /> Then I browsed to my app, which presented me with its logon screen and I duly logged in. Once in, I was presented not with the page I was expecting, but with a page of garbage. I can change the session state configuration back and forth; the problem goes away and then comes back based on which set of configuration I use. Why is this happening?

    Read the article

  • Do you have health checks in your web app or web site?

    - by Pekka
    I have built PHP-based "health check" scripts for several projects, but they were always custom-made for the occasion and not written for abstraction as an independent product. I would like to know whether such a solution exists. What I mean by "health check" is a protected web page that functions much like a suite of unit tests, but on a more operational level, showing red/yellow/green statuses for things like: Are the cache directories writable? Is the PHP version correct, are required extensions installed? Is the configuration file protected from writing? Is the database server reachable? Do the key tables exist in the database? Is there enough disk space available? Is the site's front page reachable and does it render fully ( = no PHP errors)? Do the project's libraries' MD5 checksums match the original ones? Do you do this - or parts of it - in your applications and web sites? Are there any standardized tools for this that bring along all the functionality to perform the tests (ideally as plugins), and just need to be configured accordingly? Is there a way to set this up using one of the Unit Testing frameworks available for PHP (preferably PHPUnit)? If so, do you know any resources / tutorials outlining how?
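
    For the two database items in that list, a sketch of the kind of queries a health-check page might run; the schema and table names are hypothetical:

      -- Reachability: any trivial round trip will do.
      SELECT 1;

      -- Key tables present: count the expected names in the information schema.
      SELECT COUNT(*) AS key_tables_found
      FROM   information_schema.tables
      WHERE  table_schema = 'myapp'
      AND    table_name IN ('users', 'orders', 'config');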

    Read the article

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQlite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database, tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state. What I prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the google hits for this topic that I can find are referring to the core-data provided SQLite which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!
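
    The transaction idea maps onto ordinary SQLite statements; a sketch of the batch the JavaScript would issue on first run (the table and rows are hypothetical), with the Web SQL db.transaction() API providing the same all-or-nothing behaviour:

      -- If the app is killed before COMMIT, SQLite rolls the whole batch back,
      -- so the database is never left half-populated.
      BEGIN TRANSACTION;
      CREATE TABLE IF NOT EXISTS app_data (id INTEGER PRIMARY KEY, value TEXT);
      INSERT INTO app_data (value) VALUES ('seed row 1');
      INSERT INTO app_data (value) VALUES ('seed row 2');
      COMMIT;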

    Read the article
