Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Java/Hibernate using interfaces over the entities.

    - by Dennetik
    I am using annotated Hibernate, and I'm wondering whether the following is possible. I have to set up a series of interfaces representing the objects that can be persisted, and an interface for the main database class containing several operations for persisting these objects (... an API for the database). Below that, I have to implement these interfaces, and persist them with Hibernate. So I'll have, for example: public interface Data { public String getSomeString(); public void setSomeString(String someString); } @Entity public class HbnData implements Data, Serializable { @Column(name = "some_string") private String someString; public String getSomeString() { return this.someString; } public void setSomeString(String someString) { this.someString = someString; } } Now, this works fine, sort of. The trouble comes when I want nested entities. The interface of what I'd want is easy enough: public interface HasData { public Data getSomeData(); public void setSomeData(Data someData); } But when I implement the class and follow the interface, as below, I get an error from Hibernate saying it doesn't know the class "Data". @Entity public class HbnHasData implements HasData, Serializable { @OneToOne(cascade = CascadeType.ALL) private Data someData; public Data getSomeData() { return this.someData; } public void setSomeData(Data someData) { this.someData = someData; } } The simple change would be to change the type from "Data" to "HbnData", but that would obviously break the interface implementation, and thus make the abstraction impossible. Can anyone explain to me how to implement this in a way that it will work with Hibernate?
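
    A minimal sketch of one commonly suggested fix, assuming the HbnData class above: the JPA targetEntity attribute tells Hibernate which concrete class backs the interface-typed field, so the public API can keep using Data.

        import java.io.Serializable;
        import javax.persistence.CascadeType;
        import javax.persistence.Entity;
        import javax.persistence.OneToOne;

        @Entity
        public class HbnHasData implements HasData, Serializable {

            // targetEntity points Hibernate at the concrete implementation of Data
            @OneToOne(targetEntity = HbnData.class, cascade = CascadeType.ALL)
            private Data someData;

            public Data getSomeData() { return this.someData; }

            public void setSomeData(Data someData) { this.someData = someData; }
        }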

    Read the article

  • Unserializing an API return object (PHP/Ebay API)

    - by DavidYell
    I have been working with the Ebay api for a project and have found it great. I have however found a problem now, more PHP related. When I read my items from Ebay, I store a bunch of details in the database. Currently, just for the sake of it really, I serialize the whole return object and store it in the database in a related table. The idea being, that when I display my information, I have all the details to hand should I need them. The problem arises in that the pricing information is always in a sub object. [ConvertedAdjustmentAmount] => __PHP_Incomplete_Class Object ( [__PHP_Incomplete_Class_Name] => eBayAmountType [_] => 0 [currencyID] => USD ) As you can see when I unserialize my object, my cunning plan falls foul of the Incomplete class problem. I have checked the following question, without success. http://stackoverflow.com/questions/965611/forcing-access-to-php-incomplete-class-object-properties The main issue lies, as far as I can see, in that the price class is stored in the Ebay api, so how do I recreate it? I have been reading this page, http://uk3.php.net/manual/en/function.unserialize.php and trying to figure out, unserialize_callback_func which I can't figure out either, so any help would be appreciated!
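
    A hedged sketch of one way out: __PHP_Incomplete_Class simply means the class definition was not loaded when unserialize() ran, so registering an unserialize_callback_func that pulls in the eBay SDK class file lets the object rehydrate fully (the include path and column name below are assumptions).

        <?php
        // Sketch: register a callback so the eBay SDK class files are loaded on demand
        // before unserialize() tries to rebuild objects such as eBayAmountType.
        // The include path and column name are assumptions; adjust them to your project.
        ini_set('unserialize_callback_func', 'load_ebay_class');

        function load_ebay_class($classname)
        {
            require_once '/path/to/ebay-sdk/' . $classname . '.php';   // hypothetical SDK layout
        }

        $details = unserialize($row['serialized_response']);            // hydrates real objects now
        var_dump($details);                                             // no more __PHP_Incomplete_Class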

    Read the article

  • Creating PHP strings using other variables...works manually, can't figure out how automatically

    - by Matt
    Hello, I'm trying to get a variable to be formed automatically using data pulled from a mysql database. I know the data is being pulled from the database in some form resembling its original form, but that data does not act the same as data that is manually typed and assigned to a string. For example, if a cell in a mysql table says... I said "goodbye" before I left. She also said "goodbye." ...and I manually copy/paste it and add the necessary escapes... $string1 = " I said \"goodbye\" before I left. She also said \"goodbye.\" "; ...that does not equal... $string1 = $mysqlResultArray['specificCellWithQuoteShownAbove'] Interestingly, if I echo both versions of $string1 and view the output, they appear to be exactly the same. But they do not function the same when put through various functions I've created. The functions only work if I do the manual copy/paste method--which is not what I want, obviously. I'm not sure if it has to do with the line breaks or the escapes--or some combination of the two. But while both strings are superficially the same, they are apparently functionally different and I don't know why. So how can I create $string1 without manually copy/pasting the contents from the mysql entry and instead querying for the data and assigning it to $string1 in such a way that it's exactly functionally equivalent as the manual copy/pasted string?
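
    A small diagnostic sketch (the column name is taken from the question, the literal is an assumption): when a literal and a database value echo identically but behave differently, the culprit is usually invisible bytes such as \r\n line endings, trailing whitespace, or extra backslashes, so dumping the raw bytes and normalising them tends to settle it.

        <?php
        // Diagnostic sketch: expose the invisible differences between the two strings.
        $fromDb  = $mysqlResultArray['specificCellWithQuoteShownAbove'];
        $literal = " I said \"goodbye\" before I left.\n\nShe also said \"goodbye.\" ";

        var_dump($fromDb === $literal);   // usually false even though both echo the same
        echo bin2hex($fromDb) . "\n";     // look for 0d0a (\r\n) or 5c (backslash) in the dump
        echo bin2hex($literal) . "\n";

        // Normalise line endings before handing the value to the existing functions;
        // add stripslashes() as well if the hex dump shows unexpected backslashes.
        $normalised = str_replace(array("\r\n", "\r"), "\n", $fromDb);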

    Read the article

  • How to create C++ istringstream from a char array with null(0) characters?

    - by Morpheus
    I have a char array which contains null characters at random locations. I tried to create an istringstream using this array (encodedData_arr) as below. I use this istringstream to insert binary data (image data of an IplImage) into a MySQL database blob field (using MySQL Connector/C++'s setBlob(istream *is)), but it only stores the characters up to the first null character. Is there a way to create an istringstream using a char array with null characters? unsigned char *encodedData_arr = new unsigned char[data_vector_uchar->size()]; // Assign the data of vector<unsigned char> to the encodedData_arr for (int i = 0; i < vec_size; ++i) { cout<< data_vector_uchar->at(i)<< " : "<< encodedData_arr[i]<<endl; } // Here the content of the encodedData_arr is same as the data_vector_uchar // So char array is initializing fine. istream *is = new istringstream((char*)encodedData_arr, istringstream::in || istringstream::binary); prepStmt_insertImage->setBlob(1, is); // Here only part of the data is stored in the database blob field (Up to the first null character)
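
    A sketch of one way around it: passing a char* goes through std::string's null-terminated constructor, which stops at the first '\0', so copy the buffer into a std::string with an explicit length first (the '||' in the snippet above should also be a single '|', since '||' collapses the open-mode flags to a boolean).

        #include <cstddef>
        #include <sstream>
        #include <string>

        // Build an input stream that keeps embedded null bytes.
        std::istringstream* makeBinaryStream(const unsigned char* data, std::size_t size)
        {
            // Length-aware copy: every byte survives, including '\0'.
            std::string buffer(reinterpret_cast<const char*>(data), size);
            return new std::istringstream(buffer,
                                          std::istringstream::in | std::istringstream::binary);
        }

        // Usage with the prepared statement from the question:
        // prepStmt_insertImage->setBlob(1, makeBinaryStream(encodedData_arr, data_vector_uchar->size()));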

    Read the article

  • GridView loses ItemTemplate after columns are removed

    - by Middletone
    I'm trying to bind a datatable to a gridview where I've removed some of the autogenerated columns in the code behind. I've got two template columns, and it seems that when I alter the gridview in the code behind and remove the non-templated columns, the templates lose the controls that are in them. Using the following as a sample, "Header A" will continue to be visible but "Header B" will disappear after removing any columns that are located at index 2 and above. I'm creating columns in my codebehind for the grid as a part of a reporting tool. If I don't remove the columns then there doesn't seem to be an issue. <asp:GridView ID="DataGrid1" runat="server" AutoGenerateColumns="false" AllowPaging="True" PageSize="10" GridLines="Horizontal"> <Columns> <asp:TemplateField HeaderText="Header A" > <ItemTemplate > Text A </ItemTemplate> </asp:TemplateField> <asp:TemplateField> <HeaderTemplate> Header B </HeaderTemplate> <ItemTemplate> Text B </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> For i = 2 To DataGrid1.Columns.Count - 1 DataGrid1.Columns.RemoveAt(2) Next EDIT So from what I've read this seems to be a problem that occurs when the grid is altered. Does anyone know of a good workaround to re-initialize the template columns or set them up again so that when the non-template columns are removed the templates don't get removed as well?
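
    One hedged workaround to try, sketched below: rather than removing the auto-generated columns on postback, hide them, which leaves the two TemplateField columns and their ItemTemplate controls untouched while producing the same rendered output.

        ' Sketch: hide the extra columns instead of removing them.
        ' Indexes follow the layout in the question (template columns at 0 and 1).
        For i As Integer = 2 To DataGrid1.Columns.Count - 1
            DataGrid1.Columns(i).Visible = False
        Next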

    Read the article

  • N Tiers with SubSonic 3, Dirty Columns collection is always empty on update

    - by Adel
    Hello guys, here is what I am doing, and it is not working for me. I have a DAL generated with the SubSonic 3 ActiveRecord template, and I have a service layer (business layer if you will) that is a mixture of facade and some validation. Say I have a method on the service layer like public void UpdateClient(Client client); in my GUI I create a Client object, fill it with some data and an ID, and pass it to the service method, and this never works: the dirty columns collection (which tracks which columns are altered in order to use a more efficient update statement) is always empty. If I get the object from the database inside my GUI and then pass it to the service method, it doesn't work either. The only scenario I find working is if I query the object from the database and call Update() on the same context, all inside my GUI, and this defeats the whole service layer I've created. However, for insert and delete everything works fine. I wonder if this has something to do with object tracking, but as far as I know SubSonic doesn't do that. Please advise. Thanks. Adel.

    Read the article

  • Help dealing with data dependency between two registration forms

    - by franko75
    I have a tricky issue here with a registration of both a user and his/her pet. Both the user and the pet are treated as separate entities and both require separate registration forms. However, the user's pet has to be linked to the user via a foreign key in the database. The process is basically that when a new user joins the site, firstly they register their pet, then they register themselves. The reason for this order is to check their pet's eligibility for the site (there are some criteria to be met) first, instead of getting the user to sign up only to then find out their pet is ineligible. It is this ordering of the form submissions which is causing me a bit of a headache, as follows... The site is being developed with an MVC framework, and the User registration process is managed via a method in the User_form controller, while the pet registration process is managed via a method in the Pet_form controller. The pet registration form happens first, and the pet data can be saved without the owner_id at this stage, with the user id possibly being added (e.g. by retrieving the pet's id from the session) following user registration. However, doing it this way could potentially result in redundant data, where pet records would be created in the database, but if the user doesn't actually register themselves too, then the pets will be ownerless records in the DB. The other option is to serialize the new pet's data at the pet registration stage and not save it to the DB until the user fills out their registration form. Once the user is created, I can pass the serialised data AND the owner_id to a method in the Pet Model which can update the DB. However, I also need to set the newly created $pet to $this->pet, which I then access for a sequence of other related forms. Should I just set the session variable in the model method? Then in the Pet controller constructor, do a check for a pet stored in the session and, if there is one, assign it to $this->pet... If this makes any sense to anybody and you have some advice, I'd be grateful to hear it!

    Read the article

  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using Linq to get the records and use .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 Items, which I then add to the database. So the average amount of total Items that has to be created is around 2 million. while (objects.Count != 0) { using (dataContext = new LinqToSqlContext(new DataContext())) { foreach (Object objectRecord in objects) { // Create a list of 0 - 4 Random Items and add each Item to the Object for (int i = 0; i < Random.Next(0, 4); i++) { Item item = new Item(); item.Id = Guid.NewGuid(); item.Object = objectRecord.Id; item.Created = DateTime.Now; item.Changed = DateTime.Now; dataContext.InsertOnSubmit(item); } } dataContext.SubmitChanges(); } amountToSkip += 250; objects = objectCollection.Skip(amountToSkip).Take(250).ToList(); } Now the problem arises when creating the Items. When running the application (and not even using dataContext) the memory increases consistently. It's like the items are never getting disposed. Does anyone notice what I'm doing wrong? Thanks in advance!
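
    A hedged sketch of two adjustments worth trying (Item, DataContext and LinqToSqlContext are the question's own types): evaluate the random count once per record instead of in the loop condition, and turn off change tracking on the context used only for the paged reads so retrieved rows are not all kept alive.

        // Sketch only; details outside the shown fragment are assumptions.
        var random = new Random();   // reuse one Random instance

        foreach (var objectRecord in objects)
        {
            int itemCount = random.Next(0, 5);   // evaluate once; note Next(0, 4) never returns 4
            for (int i = 0; i < itemCount; i++)
            {
                var item = new Item
                {
                    Id = Guid.NewGuid(),
                    Object = objectRecord.Id,
                    Created = DateTime.Now,
                    Changed = DateTime.Now
                };
                dataContext.InsertOnSubmit(item);
            }
        }
        dataContext.SubmitChanges();

        // On the context that only feeds objectCollection.Skip(...).Take(250):
        // readContext.ObjectTrackingEnabled = false;   // LINQ to SQL then stops caching every row it reads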

    Read the article

  • Is something along the lines of nested memoization needed here?

    - by Daniel
    System.Transactions notoriously escalates transactions involving multiple connections to the same database to the DTC. The module and helper class, ConnectionContext, below are meant to prevent this by ensuring multiple connection requests for the same database return the same connection object. This is, in some sense, memoization, although there are multiple things being memoized and the second is dependent on the first. Is there some way to hide the synchronization and/or mutable state (perhaps using memoization) in this module, or perhaps rewrite it in a more functional style? (It may be worth nothing that there's no locking when getting the connection by connection string because Transaction.Current is ThreadStatic.) type ConnectionContext(connection:IDbConnection, ownsConnection) = member x.Connection = connection member x.OwnsConnection = ownsConnection interface IDisposable with member x.Dispose() = if ownsConnection then connection.Dispose() module ConnectionManager = let private _connections = new Dictionary<string, Dictionary<string, IDbConnection>>() let private getTid (t:Transaction) = t.TransactionInformation.LocalIdentifier let private removeConnection tid = let cl = _connections.[tid] for (KeyValue(_, con)) in cl do con.Close() lock _connections (fun () -> _connections.Remove(tid) |> ignore) let getConnection connectionString (openConnection:(unit -> IDbConnection)) = match Transaction.Current with | null -> new ConnectionContext(openConnection(), true) | current -> let tid = getTid current // get connections for the current transaction let connections = match _connections.TryGetValue(tid) with | true, cl -> cl | false, _ -> let cl = Dictionary<_,_>() lock _connections (fun () -> _connections.Add(tid, cl)) cl // find connection for this connection string let connection = match connections.TryGetValue(connectionString) with | true, con -> con | false, _ -> let initial = (connections.Count = 0) let con = openConnection() connections.Add(connectionString, con) // if this is the first connection for this transaction, register connections for cleanup if initial then current.TransactionCompleted.Add (fun args -> let id = getTid args.Transaction removeConnection id) con new ConnectionContext(connection, false)

    Read the article

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature: create an empty database, Tasks - Import Data..., .NET Framework Data Provider for Odbc, valid DSN (verified it connects), copy data from one or more tables or views, check 1 VERY simple table, click Preview, get error: The preview data could not be retrieved. ADDITIONAL INFORMATION: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll) A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the above error. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).

    Read the article

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: There is a SQL Server DB with several tables that have triggers set on them (that collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table. When I backup the DB and then restore it to another machine (Virtual Machine in this case, but it does not matter), the triggers don't work anymore. In fact, no impersonation for the user works anymore. Even a simple statement such as this exec ('select 3') as user='HistoryUser' produces an error: Cannot execute as the database principal because the principal "HistoryUser" does not exist, this type of principal cannot be impersonated, or you do not have permission. I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution) this problem remains. If I create another user without login, I can use it for impersonation just fine. That is, this works just fine: create user TestUser without login go exec ('select 3') as user='TestUser' I do not want to recreate all those triggers, so is there any way how I can make the existing HistoryUser work? Bump: Sorry, but this is kinda urgent...
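
    A hedged sketch of the usual first step for this exact message on a restored database, in case it was not run against the restored copy itself: re-point the database owner at a login that exists on the new instance (the owner SID from the old server does not survive the restore), then re-test the impersonation. Database and login names below are placeholders.

        -- Sketch: reset the restored database's owner, then re-test the impersonation.
        ALTER AUTHORIZATION ON DATABASE::[RestoredDb] TO [sa];   -- names are placeholders

        USE [RestoredDb];
        EXECUTE AS USER = 'HistoryUser';
        SELECT 3;
        REVERT;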

    Read the article

  • Qt/C++ Error handling

    - by ShiGon
    I've been doing a lot of research about handling errors with Qt/C++ and I'm still as lost as when I started. Maybe I'm looking for an easy way out (like other languages provide). One, in particular, provides an unhandled-exception handler, which I use religiously. When the program encounters a problem, it throws the unhandled exception so that I can create my own error report. That report gets sent from my customers' machines to a server online, which I then read later. The problem that I'm having with C++ is that any error handling that's done has to be thought of beforehand (think try/catch or massive conditionals). In my experience, problems in code are not thought of beforehand, else there wouldn't be a problem to begin with. Writing a cross-platform application without a cross-platform error handling/reporting/trace mechanism is a little scary to me. My question is: is there any kind of Qt- or C++-specific "catch-all" error trapping mechanism that I can use in my application so that, if something does go wrong, I can at least write a report before it crashes?
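
    A hedged sketch of the closest Qt equivalent: reimplementing QApplication::notify() wraps every event and slot invocation in one try/catch, which gives a last-chance spot to write a crash report before bailing out (the reporting helper below is a placeholder); pairing it with std::set_terminate covers exceptions thrown outside the event loop.

        #include <QApplication>
        #include <QDebug>
        #include <exception>

        class ReportingApplication : public QApplication
        {
        public:
            ReportingApplication(int &argc, char **argv) : QApplication(argc, argv) {}

            bool notify(QObject *receiver, QEvent *event)
            {
                try {
                    return QApplication::notify(receiver, event);   // normal dispatch
                } catch (const std::exception &ex) {
                    writeReport(ex.what());                          // last-chance report
                } catch (...) {
                    writeReport("unknown exception");
                }
                return false;
            }

        private:
            static void writeReport(const char *what)                // placeholder reporter
            {
                qCritical() << "Unhandled exception:" << what;
            }
        };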

    Read the article

  • Spring Security ACL: NotFoundException from JDBCMutableAclService.createAcl

    - by user340202
    Hello, I've been working on this task for too long to abandon the idea of using Spring Security to achieve it, but I wish that the community will provide with some support that will help reduce the regret that I have for choosing Spring Security. Enough ranting and now let's get to the point. I'm trying to create an ACL by using JDBCMutableAclService.createAcl as follows: [code] public void addPermission(IWFArtifact securedObject, Sid recipient, Permission permission, Class clazz) { ObjectIdentity oid = new ObjectIdentityImpl(clazz.getCanonicalName(), securedObject.getId()); this.addPermission(oid, recipient, permission); } @Override @Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_UNCOMMITTED, readOnly = false) public void addPermission(ObjectIdentity oid, Sid recipient, Permission permission) { SpringSecurityUtils.assureThreadLocalAuthSet(); MutableAcl acl; try { acl = this.mutableAclService.createAcl(oid); } catch (AlreadyExistsException e) { acl = (MutableAcl) this.mutableAclService.readAclById(oid); } // try { // acl = (MutableAcl) this.mutableAclService.readAclById(oid); // } catch (NotFoundException nfe) { // acl = this.mutableAclService.createAcl(oid); // } acl.insertAce(acl.getEntries().length, permission, recipient, true); this.mutableAclService.updateAcl(acl); } [/code] The call throws a NotFoundException from the line: [code] // Retrieve the ACL via superclass (ensures cache registration, proper retrieval etc) Acl acl = readAclById(objectIdentity); [/code] I believe this is caused by something related to Transactional, and that's why I have tested with many TransactionDefinition attributes. I have also doubted the annotation and tried with declarative transaction definition, but still with no luck. One important point is that I have used the statement used to insert the oid in the database earlier in the method directly on the database and it worked, and also threw a unique constraint exception at me when it tried to insert it in the method. I'm using Spring Security 2.0.8 and IceFaces 1.8 (which doesn't support spring 3.0 but definetely supprorts 2.0.x, specially when I keep caling SpringSecurityUtils.assureThreadLocalAuthSet()). My AppServer is Tomcat 6.0, and my DB Server is MySQL 6.0 I wish to get back a reply soon because I need to get this task off my way

    Read the article

  • Check if date is allowed weekday in php?

    - by moogeek
    Hello! I'm stuck with a problem: how to check if a specific date is within an allowed-weekdays array in PHP. For example, function dateIsAllowedWeekday($_date,$_allowed) { if ((isDate($_date)) && (($_allowed!="null") && ($_allowed!=null))){ $allowed_weekdays=json_decode($_allowed); $weekdays=array(); foreach($allowed_weekdays as $wd){ $weekday=date("l",date("w",strtotime($wd))); array_push($weekdays,$weekday); } if(in_array(date("l",strtotime($_date)),$weekdays)){return TRUE;} else {return FALSE;} } else {return FALSE;} } ///////////////////////////// $date="21.05.2010"; $wd="[0,1,2]"; if(dateIsAllowedWeekday($date,$wd)){echo "$date is within $wd weekday values!";} else{echo "$date isn't within $wd weekday values!"} I have input dates formatted as "d.m.Y" and an array returned from the database with weekday numbers (formatted as 'Numeric representation of the day of the week') like [0,1,2] - (Sunday, Monday, Tuesday). The returned string from the database can be "null", so I check that too. Then, the isDate function checks whether the date is a valid date, and that is OK. I want to check if my date, for example 21.05.2010, is an allowed weekday in this array. My function always returns TRUE and somehow the weekday is always 'Thursday', and I don't know why... Are there any other ways to check this, or what could be my error in the code above? thx
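
    A sketch of a simpler check that also explains the "always Thursday" symptom: the nested date("l", date("w", strtotime($wd))) calls end up formatting tiny timestamps right after the Unix epoch (which fell on a Thursday), so comparing the date's numeric weekday against the allowed numbers directly avoids the whole conversion.

        <?php
        // Sketch: compare numeric weekdays (0 = Sunday ... 6 = Saturday) directly.
        function dateIsAllowedWeekday($date, $allowedJson)
        {
            if ($allowedJson === null || $allowedJson === "null") {
                return false;
            }
            $allowed = json_decode($allowedJson);            // e.g. [0,1,2]
            $weekday = (int) date("w", strtotime($date));    // "21.05.2010" -> 5 (Friday)
            return in_array($weekday, $allowed);
        }

        var_dump(dateIsAllowedWeekday("21.05.2010", "[0,1,2]"));   // bool(false)
        var_dump(dateIsAllowedWeekday("23.05.2010", "[0,1,2]"));   // bool(true), a Sunday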

    Read the article

  • Does the Entity Framework or MySQL provider swallow timeout exceptions on enumeration of results?

    - by Freddy Rios
    I'm trying to make sense of a situation I have using entity framework on .net 3.5 sp1 + MySQL 6.1.2.0 as the provider. It involves the following code: Response.Write("Products: " + plist.Count() + "<br />"); var total = 0; foreach (var p in plist) { //... some actions total++; //... other actions } Response.Write("Total Products Checked: " + total + "<br />"); Basically the total products is varying on each run, and it isn't matching the full total in plist. Its varies widely, from ~ 1/5th to half. There isn't any control flow code inside the foreach i.e. no break, continue, try/catch, conditions around total++, anything that could affect the count. As confirmation, there are other totals captured inside the loop related to the actions, and those match the lower and higher total runs. I don't find any reason to the above, other than something in entity framework or the mysql provider that causes it to end the foreach when retrieving an item. The body of the foreach can have some good variation in time, as the actions involve file & network access, my best shot at the time is that when it takes beyond certain threshold there is some type of timeout in the underlying framework/provider and instead of causing an exception it is silently reporting no more items for enumeration. Can anyone give some light in the above scenario and/or confirm if the entity framework/mysql provider has the above behavior?

    Read the article

  • Access denied from another thread

    - by Lobuno
    Hello! In a program I spawn a thread ("the working thread"). Here I copy some files, write some data to a database, and eventually delete some other files or directories. Everything works fine. The problem now is that I decided to move the deleting operation to another thread. So the working thread now copies the files or directories, writes to the database, and, if there is a need to delete some other files, this thread spawns another thread, and that second thread deletes the needed files or directories. The problem is that the deletion used to work 100% when done in the working thread, but now, when the same is done in the secondary thread, I sometimes get an "Access denied" error and the files cannot be deleted. And no, the working thread is definitely NOT accessing the files and directories to delete at this moment. Sometimes (but not always) the main thread is impersonating some user, so, if needed, the deleting thread is also running under impersonation just to grant the needed permissions to delete the files, so that should not be the problem. Does anybody have a clue why this could be happening?

    Read the article

  • Magento help with creating model in custom class

    - by dardub
    I'm trying to add some custom filter methods to my Magento module. I thought it would be simple but I can't figure out why it's not working. My model, which extends the catalog/product class, contains this: public function filterProdType($prod_id) { $this->addAttributeToFilter('attribute_set_id', $prod_id); } Then in my template I have this: $collection = Mage::getModel('configurator/product')->getCollection()->addAttributeToSelect('*'); $collection->filterProdType(50)->addAttributeToFilter('type_id', 'bundle'); foreach ($collection as $item){ echo $item->getName() . ', '; } Just to try things out. But I don't get any results, no error, and it doesn't finish rendering the page (missing footer). When I do this instead, it works: $collection = Mage::getModel('configurator/product')->getCollection()->addAttributeToSelect('*'); $collection->addAttributeToFilter('attribute_set_id', 50)->addAttributeToFilter('type_id', 'bundle'); foreach ($collection as $item){ echo $item->getName() . ', '; } I'm just wondering what I'm missing. UPDATE: I didn't realize error reporting was turned off; after turning it on I get the error: Fatal error: Call to undefined method Mage_Catalog_Model_Resource_Eav_Mysql4_Product_Collection::filterProdType() I assumed after instantiating $collection using my custom model, it would find my new method.
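
    A hedged sketch of what the fatal error points at: getCollection() returns a product collection, not the model, so the custom method has to live on the module's own collection class (the class name and the config.xml wiring are assumptions here) and must return $this so the next call can be chained.

        <?php
        // Sketch: class and module names are assumptions; the key points are where
        // the method lives and the "return $this" that keeps the chain alive.
        class Mycompany_Configurator_Model_Mysql4_Product_Collection
            extends Mage_Catalog_Model_Resource_Eav_Mysql4_Product_Collection
        {
            public function filterProdType($attributeSetId)
            {
                $this->addAttributeToFilter('attribute_set_id', $attributeSetId);
                return $this;   // without this, ->addAttributeToFilter() is called on null
            }
        }

        // With the resource collection declared in config.xml, the template can then chain:
        // $collection = Mage::getResourceModel('configurator/product_collection')
        //     ->addAttributeToSelect('*')
        //     ->filterProdType(50)
        //     ->addAttributeToFilter('type_id', 'bundle');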

    Read the article

  • Any suggestions to improve my PDO connection class?

    - by Scarface
    Hey guys I am pretty new to pdo so I basically just put together a simple connection class using information out of the introductory book I was reading but is this connection efficient? If anyone has any informative suggestions, I would really appreciate it. class PDOConnectionFactory{ public $con = null; // swich database? public $dbType = "mysql"; // connection parameters public $host = "localhost"; public $user = "user"; public $senha = "password"; public $db = "database"; public $persistent = false; // new PDOConnectionFactory( true ) <--- persistent connection // new PDOConnectionFactory() <--- no persistent connection public function PDOConnectionFactory( $persistent=false ){ // it verifies the persistence of the connection if( $persistent != false){ $this->persistent = true; } } public function getConnection(){ try{ $this->con = new PDO($this->dbType.":host=".$this->host.";dbname=".$this->db, $this->user, $this->senha, array( PDO::ATTR_PERSISTENT => $this->persistent ) ); // carried through successfully, it returns connected return $this->con; // in case that an error occurs, it returns the error; }catch ( PDOException $ex ){ echo "We are currently experiencing technical difficulties. We have a bunch of monkies working really hard to fix the problem. Check back soon: ".$ex->getMessage(); } } // close connection public function Close(){ if( $this->con != null ) $this->con = null; } }
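
    A hedged sketch of a few tightening-up suggestions (names kept from the class above): throw on driver errors instead of echoing them to the visitor, reuse an already-open handle, and declare the charset in the DSN (the charset option needs PHP 5.3.6 or later).

        <?php
        // Sketch of a revised version; property names follow the original class.
        class PDOConnectionFactory
        {
            private $con = null;
            private $dbType = "mysql";
            private $host = "localhost";
            private $user = "user";
            private $senha = "password";
            private $db = "database";
            private $persistent;

            public function __construct($persistent = false)
            {
                $this->persistent = (bool) $persistent;
            }

            public function getConnection()
            {
                if ($this->con !== null) {
                    return $this->con;    // reuse the handle instead of reconnecting
                }
                try {
                    $dsn = "{$this->dbType}:host={$this->host};dbname={$this->db};charset=utf8";
                    $this->con = new PDO($dsn, $this->user, $this->senha, array(
                        PDO::ATTR_PERSISTENT => $this->persistent,
                        PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
                    ));
                    return $this->con;
                } catch (PDOException $ex) {
                    error_log($ex->getMessage());                   // keep the detail out of the page
                    throw new RuntimeException("Database connection failed.");
                }
            }

            public function close()
            {
                $this->con = null;
            }
        }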

    Read the article

  • CodeIgniter multiple LIKE db query using associative array - but all from the same column name...?

    - by Inigo
    Hi, I'm trying to query my database using CodeIgniter's active record class. I have a number of blog posts stored in a table. The query is for a search function, which will pull out all the posts that have certain categories assigned to them. So the 'category' column of the table will have a list of all the categories for that post in no particular order, separated by commas, like so: Politics,History,Sociology.. etc. If a user selects, say, Politics and History, the titles of all the posts that have BOTH these categories should be returned. Right? So, the list of categories queried will be the array $cats. I thought this would work- foreach ($cats as $cat){ $this->db->like('categories',$cat); } by producing this: $this->db->like('categories','Politics'); $this->db->like('categories','History'); (which would produce 'WHERE categories LIKE '%Politics%' AND categories LIKE '%History%'). But it doesn't work; it seems to only produce the first statement. The problem I guess is that the column name is the same for each of the chained queries. There doesn't seem to be anything in the CI user guide about this (http://codeigniter.com/user_guide/database/active_record.html) as they seem to assume that each chained statement is going to be for a different column name. Does anyone know how I could do this? Thanks! edit- Of course it is not possible to use an associative array in one statement as it would have to contain duplicate keys- in this case every key would have to be 'categories'...

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids on the two tables. To sum it up: 2 tables, lots of INSERT, no UPDATE, some DELETE (once a day at most), some user-generated SELECT with JOIN, and a huge data set. What would an optimal server configuration (software and hardware; I assume for example that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes...) if needed.

    Read the article

  • Managing Many to Many relationships in asp.net Wizard Control

    - by Luis
    Say I have this entity with a lot of attributes. In the input form I have decided to implement a wizard control so I can collect information about this entity in several steps. The problem is that I need to collect information that has been modeled as many-to-many relationships. I am planning to use a Telerik gridview to manage this (add/edit/delete); the problem is where to store that data, since the entity in an insert form has not been created in the database yet. OK, so I can store all that info in temporary lists residing in the viewstate, waiting for the final submit where I dump it all into the DB, but in one of the steps I am collecting files... and storing files in the viewstate is out of the question, same as storing them in the session... I have been thinking of implementing it in a way that the user has to submit some info first (say the first 3 steps), commit the data to the database creating the parent entity, and then start inserting all the child entities... but this will get weird and confusing, since in the first steps you are not saving the data to the DB and in the next ones you are committing directly... Does anyone have any thoughts on this? Thanks

    Read the article

  • Output query in strict table format in CodeIgniter

    - by riad
    Dear all, my code is below. It shows the output in table format with no problems, but when a particular tr gets long output from the database, the table breaks. Now, how can I fix the tr width strictly? Let's say I want each td to be no more than 100px wide. How can I do it? Note: here table means the HTML table, not the database table. if ($query->num_rows() > 0) { $output = ''; foreach ($query->result() as $function_info) { if ($description) { $output .= ''.$function_info->songName.''; $output .= ''.$function_info->albumName.''; $output .= ''.$function_info->artistName.''; $output .= ''.$function_info->Code1.''; $output .= ''.$function_info->Code2.''; $output .= ''.$function_info->Code3.''; $output .= ''.$function_info->Code4.''; $output .= ''.$function_info->Code5.''; } else { $output .= ''.$function_info->songName.''; } } $output .= ''; return $output; } else { return 'Result not found.'; } thanks riad

    Read the article

  • How to force the client browser to download images from the server rather than using its cache

    - by anonim.developer
    Assume a simple aspx data entry page in which an admin user can upload an image as well as some other data. They are stored in the database, and the next time the admin visits that page to edit the record, the image data is fetched, a preview is generated and saved to disk (using GDI+), and the preview is shown in an image control. This procedure works fine the first time; however, if the image changes (a new one is uploaded), the next time the page is visited it shows the previously uploaded image. I debugged the application and everything works correctly: the new image data is in the database and the new preview is stored in the Temp location, yet the page shows the previous one. If I refresh the page it shows the new image preview. I should mention that the preview is always saved to disk under one name (the id of each record as the name). I think that is because IE and other browsers use the client cache instead of loading images each time a page is visited. I wonder if there is a way to force the client browser to refresh itself so the newly uploaded image is shown without user intervention. Thanks and appreciation in advance,
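
    A hedged sketch of the usual remedy: since the preview file keeps the same name, make the URL change whenever the image data changes by appending a version token such as a timestamp or an update counter. The control name and record fields below are assumptions.

        // Sketch: cache-busting query string on the preview URL (ASP.NET code-behind).
        // imgPreview is the Image control; record.UpdatedUtc is whatever "last changed" value is stored.
        string previewUrl = "~/Temp/" + record.Id + ".jpg";
        imgPreview.ImageUrl = previewUrl + "?v=" + record.UpdatedUtc.Ticks;   // new value => browser re-fetches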

    Read the article

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 Web Service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server Authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the Web Service would throw the following exception: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?

    Read the article

  • How to write to a Text File in Pipe delimited format from MS Sql Server / ASP.Net?

    - by NJTechGuy
    I have a text file which needs to be constantly updated (at regular intervals). All I want is the syntax and possibly some code that outputs data from an MS SQL database using ASP.NET. The code I have so far is: <%@ Import Namespace="System.IO" %> <script language="vb" runat="server"> sub Page_Load(sender as Object, e as EventArgs) Dim FILENAME as String = Server.MapPath("Output.txt") Dim objStreamWriter as StreamWriter ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME) objStreamWriter = File.AppendText(FILENAME) objStreamWriter.WriteLine("A user viewed this demo at: " & DateTime.Now.ToString()) objStreamWriter.Close() Dim objStreamReader as StreamReader objStreamReader = File.OpenText(FILENAME) Dim contents as String = objStreamReader.ReadToEnd() lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>") objStreamReader.Close() end sub </script> <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" /> With PHP, it is a breeze, but with .NET I have no clue. If you could help me with the database connectivity and how to write the data in pipe delimited format to an Output.txt file, that would be awesome. Thanks guys!
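
    A hedged sketch of the missing piece, sticking to the page's VB style: open a SqlConnection, read rows with a SqlDataReader, and append each row to Output.txt with the columns joined by "|". The connection string, table and column names are placeholders, and Imports for System.Data.SqlClient and System.IO are assumed.

        Sub ExportPipeDelimited()
            Dim fileName As String = Server.MapPath("Output.txt")
            Using cn As New SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True")
                Using cmd As New SqlCommand("SELECT Id, Name, Price FROM dbo.Products", cn)
                    cn.Open()
                    Using reader As SqlDataReader = cmd.ExecuteReader()
                        Using writer As StreamWriter = File.AppendText(fileName)
                            While reader.Read()
                                ' Join every column of the current row with the pipe character.
                                Dim fields(reader.FieldCount - 1) As String
                                For i As Integer = 0 To reader.FieldCount - 1
                                    fields(i) = reader(i).ToString()
                                Next
                                writer.WriteLine(String.Join("|", fields))
                            End While
                        End Using
                    End Using
                End Using
            End Using
        End Sub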

    Read the article
