Search Results

Search found 31356 results on 1255 pages for 'database backups'.

Page 624 of 1255

  • Find a control by string from an ASP.NET web service

    - by jphenow
    Since it seems relatively difficult to send a WebControl object over JSON using jquery.ajax(), I've decided to send the name of the control as a string, because I know how to do that. Then I promptly realized that, from a web service, I don't actually know how to search for a control by its ID. Since it's a service, I can't seem to get Control.FindControl() to work, so does anyone have ideas or suggestions? All I'm trying to do is call DataBind() on my RadComboBox. For anyone who knows ASP.NET/Rad controls: I'm basically updating a database and want the RadComboBox to be back in sync with that database after adding something, before I auto-select what was just added. Other than DataBind(), do I have to call anything to refresh that list? Thanks!
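
    A common workaround (a sketch, not from the question): Control.FindControl() only searches a single naming container, so when all you have is an ID string, the usual pattern is a recursive walk of the control tree. A web service has no Page instance of its own, so this only helps once you have a reference to some root control to search from.

        using System.Web.UI;

        // Minimal sketch: depth-first search of a control tree by ID string.
        private Control FindControlRecursive(Control root, string id)
        {
            if (id.Equals(root.ID))
                return root;
            foreach (Control child in root.Controls)
            {
                Control found = FindControlRecursive(child, id);
                if (found != null)
                    return found;
            }
            return null;
        }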

    Read the article

  • Is it a good idea to close and open Hibernate sessions frequently?

    - by Gaurav
    I'm developing an application which requires that the state of entities be re-read from the database at frequent intervals or on triggers. However, once Hibernate reads the state, it doesn't re-read it unless I explicitly close the session and read the entity in a new session. Is it a good idea to open a session every time I want to read the entity and then close it afterwards? How much overhead does this put on the application and the database (we also use a c3p0 connection pool)? Would it be enough to simply evict the entity from the session before reading it again?
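
    A minimal sketch of the short-lived-session pattern (the SessionFactory wiring and the Account entity are placeholders): with c3p0 underneath, opening and closing Sessions is comparatively cheap, since the expensive JDBC connections stay pooled. Session.refresh(entity) is the lighter alternative when you only need to re-read a single object you already hold.

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        // One short-lived session per read guarantees fresh state from the
        // database. Account is a placeholder entity class.
        public Account readFreshAccount(SessionFactory factory, Long id) {
            Session session = factory.openSession();   // borrows a pooled JDBC connection
            try {
                return (Account) session.get(Account.class, id);
            } finally {
                session.close();                       // returns the connection to c3p0
            }
        }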

    Read the article

  • How to handle dynamic site localizations?

    - by James Simpson
    I've got a website that is currently all in English. It is an online game, so it has a bunch of pages with static text, as well as a lot of content in a database. I am trying to expand more globally and am gearing up to release some localizations of the site. However, I'm not sure about the best way to set this up so that it's easiest for me to manage and easiest for users as well. Should I store the translated texts in a database, or should this be done in a completely different way? If it matters at all, the site is written in PHP and uses MySQL.
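
    If the translations do go in the database, one minimal shape is a key/language lookup with the key itself as a fallback; a sketch, where the translations table and the t() helper are hypothetical:

        <?php
        // Hypothetical schema: translations(lang CHAR(2), msg_key VARCHAR(64), msg_text TEXT)
        function t(PDO $db, $lang, $key)
        {
            $stmt = $db->prepare(
                'SELECT msg_text FROM translations WHERE lang = ? AND msg_key = ?');
            $stmt->execute(array($lang, $key));
            $text = $stmt->fetchColumn();
            return $text !== false ? $text : $key;  // fall back to the key itself
        }

        echo t($db, 'de', 'welcome_banner');

    Static page text is often handled with gettext or plain PHP array files instead, with the database reserved for rows of translated content; both approaches are common.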

    Read the article

  • DB2 JDBC driver does not release table locks

    - by as
    Situation: we have a web service running on Tomcat, accessing a DB2 database on an AS/400. We use the JTOpen driver with JNDI connections pooled by Tomcat, and Spring for transaction handling and database access. For each SELECT, the system takes a JDBC connection from JNDI (i.e. from the connection pool), performs the selection, and at the end closes the ResultSet and Statement and releases the Connection, in that order. That works fine: the shared lock on the table disappears. When we do an UPDATE the same way (except that there is no ResultSet in that case), the lock on the table remains after the Connection is released back to JNDI. If we set maxIdle=0 for the number of connections in the JNDI configuration, the problem disappears, but that degrades performance; we have about 100 users online on that service and need a few connections kept alive in the pool. What do you suggest?
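
    Not an answer from the article, but a likely cause: DB2 holds locks until the transaction ends, and returning a connection to the pool does not end it (with maxIdle=0 the connection is physically closed, which is why that setting hides the problem). A sketch of ending the transaction explicitly before the connection goes back; names are placeholders:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import javax.sql.DataSource;

        // Commit (or roll back) before the pooled connection is released,
        // so DB2 drops the table lock even though the connection lives on.
        public void updateAndRelease(DataSource ds, int id, String value) throws Exception {
            Connection conn = ds.getConnection();
            try {
                conn.setAutoCommit(false);
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE myschema.mytable SET col1 = ? WHERE id = ?");
                try {
                    ps.setString(1, value);
                    ps.setInt(2, id);
                    ps.executeUpdate();
                    conn.commit();        // ends the transaction, releases the lock
                } catch (Exception e) {
                    conn.rollback();      // locks are released either way
                    throw e;
                } finally {
                    ps.close();
                }
            } finally {
                conn.close();             // back to the Tomcat/JNDI pool
            }
        }

    With Spring in the picture, the equivalent check is that the UPDATE really runs inside a Spring-managed transaction, so the transaction manager issues that commit.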

    Read the article

  • Anonymous user support vs. the Google bot

    - by Andy
    I have a User class in my web app that represents the user currently logged in. Every time a user visits a page, a User instance is populated from the authentication data supplied in cookies. A User instance is created even when an anonymous user arrives, and a corresponding new record is created in the User table in the database. This approach lets me save some state for the current user regardless of what type of user it is. The problem with this approach is the Google bot and the other non-human web organisms crawling my pages: every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them used for only a single page. Question: what is the best trade-off? How do I support anonymous users and save their state, without all the overhead caused by cookieless bots?
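
    One common trade-off (a sketch of the idea, not from the article): hand out the cookie on the first response, but defer the INSERT until a later request arrives carrying that cookie back. Crawlers generally do not return cookies, so they never earn a row. Illustrated as a hypothetical servlet-style helper in Java:

        import java.util.UUID;
        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Persist an anonymous user only once the client proves it keeps cookies.
        public class AnonymousUsers {
            public static void track(HttpServletRequest request, HttpServletResponse response) {
                Cookie anon = findCookie(request, "anon_id");
                if (anon == null) {
                    // First visit, or a cookieless bot: issue a token, create no row.
                    response.addCookie(new Cookie("anon_id", UUID.randomUUID().toString()));
                } else {
                    ensureUserRecord(anon.getValue());  // cookie came back: a real browser
                }
            }

            private static Cookie findCookie(HttpServletRequest request, String name) {
                Cookie[] cookies = request.getCookies();
                if (cookies != null) {
                    for (Cookie c : cookies) {
                        if (name.equals(c.getName())) return c;
                    }
                }
                return null;
            }

            private static void ensureUserRecord(String anonId) {
                // Placeholder: insert-if-absent into the User table goes here.
            }
        }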

    Read the article

  • Double Inner Join generates unexpected error

    - by Itamar Marom
    In my database I have three tables:

    - Users: UserID (AutoNumber), UserName, UserPassword, and a few other unimportant fields.
    - PrivateMessages: MessageID (AutoNumber), SenderID, and a few other fields defining the message content.
    - MessageStatus: MessageID, ReceiverID, MessageWasRead (Boolean).

    What I need is a query to which I pass a user's ID and get all the private messages he has received, together with each message's sender UserName. For this I wrote the following query:

        SELECT Users.*, PrivateMessages.*, MessageStatus.*
        FROM PrivateMessages
        INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID
        INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID
        WHERE MessageStatus.ReceiverID = [@userid];

    But for some reason, when I try saving it in my Access database, I get the following error (translated to English by me, since my Office is in a different language): Syntax error (missing operator) at expression: "PrivateMessages.SenderID = Users.UserID INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageI". Any ideas what could cause this? Thanks.
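
    Not from the question, but the likely cause: Access (Jet) SQL requires nested parentheses when a FROM clause contains more than one JOIN, which matches the "missing operator" symptom here. A sketch of the same query in the form Access accepts:

        SELECT Users.*, PrivateMessages.*, MessageStatus.*
        FROM (PrivateMessages
              INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID)
        INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID
        WHERE MessageStatus.ReceiverID = [@userid];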

    Read the article

  • Create fake subdirectories with .htaccess and PHP, and keep existing directories as-is

    - by Arseni
    I have a website which already has numerous subdirectories, all existing in the server's filesystem. I want to create new "virtual" subdirectories with .htaccess, but I only want the rule to apply to directories listed in the DB, not to ones existing in the filesystem. I.e., the filesystem has /dir1/ and /dir2/, and the MySQL database has records for 'dir3' and 'dir4'. I want:

    A: mysite.com/dir1/ and mysite.com/dir2/ to display their existing old content;
    B: mysite.com/dir3/ and mysite.com/dir4/ to display content from MySQL, provided by a PHP script via a redirect like mysite.com/myscript.php?dir=dir3;
    C: mysite.com/dir5/ to display a 404 error (the directory exists neither in the filesystem nor in the database).

    Basically I want .htaccess to work like this: if DIR exists in the DB, apply the rewrite rule and show content from myscript.php?dir=DIR; else, don't apply any rule. I can write a separate PHP script which returns 0/1 depending on whether a given dir name exists in the DB, but how do I make mod_rewrite get the data from that script? Is this possible with .htaccess/PHP at all?
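
    mod_rewrite in a .htaccess file can't query MySQL directly (RewriteMap, which could call an external program, is only available in the server config), so the usual shape is the inverse (a sketch, reusing myscript.php from the question): rewrite only requests that match no real file or directory, and let the PHP script decide between DB content and a 404.

        RewriteEngine On
        # Only URLs that are not a real file or directory reach the script,
        # so /dir1/ and /dir2/ keep serving their existing content.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^/]+)/?$ myscript.php?dir=$1 [L,QSA]

    Inside myscript.php, look the dir parameter up in MySQL and, when it is absent, send the 404 yourself: header('HTTP/1.0 404 Not Found'); exit;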

    Read the article

  • How to handle exceptions/errors in PHP?

    - by fayer
    When using third-party libraries, they tend to throw exceptions to the browser and hence kill the script. E.g. if I'm using Doctrine and insert a duplicate record into the database, it will throw an exception. I wonder what the best practice is for handling these exceptions. Should I always do a try...catch? But doesn't that mean I will have try...catch all over the script, around every single function/class I use? Or is that just for debugging? I don't quite get the picture, because if a record already exists in the database, I want to tell the user "Record already exists". And if I write a library or a function, should I always use "throw new Exception($message, $code)" when I want to raise an error? Please shed some light on how one should create and handle exceptions/errors. Thanks.
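
    A common shape (a sketch; the Doctrine exception class below is an assumption modeled on Doctrine 1.x): catch an exception only where you can do something meaningful with it, and register one top-level handler as a safety net instead of wrapping every call in try...catch.

        <?php
        // Safety net for anything nobody catches: log it, show a generic page.
        set_exception_handler(function ($e) {
            error_log($e->getMessage());
            echo 'Sorry, something went wrong.';
        });

        // Catch only what you can explain to the user, where you can explain it.
        try {
            $user->save();
        } catch (Doctrine_Connection_Exception $e) {
            echo 'Record already exists';
        }

    And yes: throwing new Exception($message, $code), or better a subclass per failure type so callers can catch selectively, is the idiomatic way for your own libraries to signal errors.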

    Read the article

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of a distributed setup for running a ton of small/simple REST web queries in a production environment. For each 5-10 related queries which are executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid based architectures such as Condor and Sun Grid Engine, which I have seen mentioned. I'm not sure if these platforms have any recovery from errors though (checking if a job succeeds). What I would really like is a FIFO type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?

    Read the article

  • Storing HTML in MySQL using Java

    - by mpcabd
    I'm working on a project where I have to store web pages in a database. I'm using crawler4j to crawl and Proxool together with MySQL Connector/J to connect to my database. When I tested the application I got: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'HTMLData'. The HTMLData column was TEXT. When I changed it to LONGTEXT the error went away, but I'm afraid it might come back in the future. Any idea how to handle this properly, so I don't have to worry about that error (or a similar one) again? Thanks :)
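
    For a sense of headroom: TEXT tops out at 64 KB, MEDIUMTEXT at 16 MB, and LONGTEXT at 4 GB, so ordinary pages will not hit the LONGTEXT limit again. Very large INSERTs can still trip the server's max_allowed_packet, though, so a defensive length check keeps that failure explicit; a sketch, with the table name and limit hypothetical:

        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class PageStore {
            // Keep safely under the server's max_allowed_packet setting.
            private static final int MAX_HTML_BYTES = 16 * 1024 * 1024;

            public void storePage(Connection conn, String url, String html) throws Exception {
                byte[] payload = html.getBytes("UTF-8");
                if (payload.length > MAX_HTML_BYTES) {
                    throw new IllegalArgumentException(
                            "page too large: " + payload.length + " bytes");
                }
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO pages (url, html_data) VALUES (?, ?)");
                try {
                    ps.setString(1, url);
                    ps.setString(2, html);
                    ps.executeUpdate();
                } finally {
                    ps.close();
                }
            }
        }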

    Read the article

  • Entity Framework: "This property descriptor does not support the SetValue method"

    - by Gayan
    Below are my entities, which I created with Entity Framework:

        Retailer: Id, Name, Childs (navigation)
        Generated schema: [Id] [int] IDENTITY(1,1) NOT NULL, [Name] nvarchar NOT NULL

        Child: Id, Name, Retailer (navigation)
        Generated schema: [Id] [int] IDENTITY(1,1) NOT NULL, [Name] nvarchar NOT NULL, [Retailer_Id] [int] NOT NULL

    As you can see in the above model, the relationship is that one retailer can have 0 or 1 child. My problem is that when I create a new child and set its Retailer navigation property to a retailer entity, it throws the following exception. How do I solve it?

        Error while setting property 'retailer': 'This property descriptor does not support the SetValue method.'

    Read the article

  • Facebook extended access token login

    - by Pota Onasys
    I have the following code:

        $facebook = new Facebook(array(
            'appId'  => ID,
            'secret' => SECRET,
            'cookie' => true));
        $fb_uid = $facebook->getUser();
        if ($fb_uid) {
            // check if a person with fb_uid as their facebook id is in the
            // database; if so, log them in, if not, register them.
            $fb_user = $facebook->api('/' . $fb_uid);

    I then get the email address using $fb_user['email'] and store it, with the facebook_id, in the database as a means to log them in in the future. Sometimes $fb_uid returns false even though the person is logged in to Facebook; I think it is because the access token expires. How can I change this code to incorporate the extended access token and log the user in to my site? offline_access is deprecated, so I need to use the extended access token.
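
    A sketch assuming the PHP SDK release that ships setExtendedAccessToken() (the 3.2.x line); that call exchanges the current short-lived token for a roughly 60-day one via fb_exchange_token, so it has to run while the short token is still valid:

        <?php
        $facebook = new Facebook(array(
            'appId'  => ID,
            'secret' => SECRET,
        ));
        $facebook->setExtendedAccessToken();        // swap for a long-lived token
        $fb_uid = $facebook->getUser();
        if ($fb_uid) {
            $token = $facebook->getAccessToken();   // persist this with the user row
            $fb_user = $facebook->api('/' . $fb_uid);
            // On later visits, restore the stored token before calling the API:
            // $facebook->setAccessToken($storedToken);
        }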

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    What I've tried: I have two email storage architectures, an old one and a new one.

    Old:
    - courier-imapd on several (18+) servers with 1 TB of storage each.
    - If one of them shows signs of running out of disk space, we migrate a few email accounts to another server.
    - The servers have no replicas, and no backups either.

    New:
    - dovecot2 on a single huge server with 16 TB of (SATA) storage and a few SSDs.
    - We store fresh mail on the SSDs and run a doveadm purge to move mail older than a day onto the SATA disks.
    - There is an identical server which holds an rsync backup, at most 15 minutes old, of the primary.
    - Higher-ups/management wanted to pack in as much storage as possible per server, to minimise the cost of SSDs per server.
    - The rsync'ing is done because GlusterFS wasn't replicating well under that much small/random IO.
    - Scaling out was expected to be done by provisioning another pair of such huge servers.
    - On facing disk-crunch issues like in the old architecture, accounts would be moved manually.

    Concerns/doubts:
    - I'm not convinced the synchronously-replicated-filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure whether there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access; if one of the servers went down for whatever reason (planned or unplanned), we'd move the IP to the other server in the pair.
    - With a filesystem like Lustre, I'd get the advantage of a single namespace, so I would not have to migrate accounts around manually and update MAILHOME paths and other metadata/data.

    Questions:
    - What are the typical methods used to scale up/out with the traditional software (courier-imapd/dovecot)?
    - Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal problems?
    - Does one have to rewrite (parts of) this software to work with object storage of some sort, such as OpenStack Object Storage?

    Read the article

  • Best way to code a web service in WebLogic?

    - by John
    I am new to WebLogic and J2EE. I need to build a web service that simply runs a query against the backend database (DB2 on z/OS) and returns the results. Being new to this, I have a few questions: 1) What is the best way to build the web service? 2) How do I connect to the database from WebLogic? 3) Is there a way to cache the returned data, so that the next request for the same data is served from cache? I've googled this, but there seem to be many ways to handle it; I am looking for the one that can handle a high volume of requests. Any links to sample code would be helpful. Thanks.
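
    For question 2, the standard WebLogic route is a JDBC connection pool plus a data source created in the admin console and fetched through JNDI; a sketch, where the JNDI name and SQL are placeholders. For question 1, a JAX-WS @WebService class deployed to WebLogic can wrap exactly this call, and for question 3, caching is usually layered on top, keyed by the query parameters.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class Db2Lookup {
            public String fetchValue(int id) throws Exception {
                InitialContext ctx = new InitialContext();
                // The name is whatever you gave the data source in the console.
                DataSource ds = (DataSource) ctx.lookup("jdbc/MyDb2DataSource");
                Connection conn = ds.getConnection();
                try {
                    PreparedStatement ps = conn.prepareStatement(
                            "SELECT some_col FROM myschema.mytable WHERE id = ?");
                    ps.setInt(1, id);
                    ResultSet rs = ps.executeQuery();
                    return rs.next() ? rs.getString(1) : null;
                } finally {
                    conn.close();   // hands the connection back to the WebLogic pool
                }
            }
        }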

    Read the article

  • JOTM, JOTM-BTP and webservice

    - by user324373
    When my request is submitted from a JSP page, on the server side I do some database work in a transaction but don't commit it yet. Then I call a web service and pass it some data. Inside that web service I do some more database operations, again without committing, and return some data. Now I want to commit both transactions with one single commit; in short, I want to manage the TX entirely from the client side. Is that possible? I am using MySQL 5.x, Tomcat 6.x, JOTM and JOTM-BTP.
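
    What this describes is one JTA transaction spanning both pieces of work; the sketch below assumes JOTM is already bound into Tomcat's JNDI. The caveat is the web-service hop: a single commit is only atomic if both the local MySQL access and the service's database work are XA resources enlisted with the same coordinator, and propagating a transaction across a web-service boundary is exactly what JOTM-BTP exists for; a plain SOAP/HTTP call will not enlist by itself.

        import javax.naming.InitialContext;
        import javax.transaction.UserTransaction;

        // doLocalWork()/callRemoteService() stand in for the two units of
        // work described in the question.
        public void submitAll() throws Exception {
            UserTransaction utx = (UserTransaction)
                    new InitialContext().lookup("java:comp/UserTransaction");
            utx.begin();
            try {
                doLocalWork();          // the uncommitted local database changes
                callRemoteService();    // the web service's database work
                utx.commit();           // one commit for everything enlisted
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }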

    Read the article

  • Copying files to my laptop makes them locked

    - by John
    When I save files to my local machine, e.g. from Remote Desktop, from email (Outlook) attachments, or even from Skype, they show a locked icon on the file. Then, for example, SQL Server doesn't let me restore backups, saying the operating system doesn't have access to the file. I've had success fixing this by setting the ownership of the parent folder to my user and letting it apply to subfolders. Sometimes I also need to go to Properties - Security - Advanced - Change Permissions, check "change child permissions...", and apply that on the parent directory. I'm using Windows 7 Professional 64-bit on an HP ProBook 4530, and I have an administrator user. This is a real pain to do every time. I suspect it might be caused by the HP software that came with the laptop; I think there is drive encryption as part of HP ProtectTools. I'm hoping there's something in Windows I can set to stop these files being locked.

    Read the article

  • Saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
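
    GNU ddrescue (a different tool from plain dd, and not mentioned in the question) implements almost exactly the strategy described: a fast first pass that skips trouble spots, a mapfile recording what was missed, and later passes that retry only the bad areas. A sketch, with the device and paths hypothetical:

        # First pass: grab everything that reads cleanly, skip the bad patches.
        ddrescue -n /dev/sdb /mnt/rescue/disk.img /mnt/rescue/disk.map
        # Second pass: return to the recorded bad areas, retrying each 3 times.
        ddrescue -d -r3 /dev/sdb /mnt/rescue/disk.img /mnt/rescue/disk.map

    Working from the resulting image also means the shaky drive only has to survive one sequential copy; prioritizing directories can then happen against the image at leisure.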

    Read the article

  • Read Call History from iPhone on iOS 5 and above

    - by Sandeep Dhama
    I am developing an application that needs to read the user's call history on the iPhone. I have gone through all the forums and googled it, and found that the records can be read from the call-history SQLite database at "private/var/root/Library/CallHistory/call_history.db". But this path does not work on iOS 5 and above; it seems Apple has changed the database paths and structure so that they can no longer be accessed. I have also seen applications on iTunes that are able to get the user's call history, e.g. https://itunes.apple.com/us/app/callog/id327883585?mt=8, and a Mac desktop utility called "Wondershare Dr.Fone" which fetches data from the iPhone such as call history, messages, notes, etc. How are these utilities fetching the call history records and other details? If there is any API, or private API, through which I can get to the call-history data, please let me know.

    Read the article

  • SQL Server XML-type column duplicate entry detection

    - by aaaa bbbb
    In SQL Server I am using an XML-typed column to store a message. I do not want to store duplicate messages, and I will only have a few messages per user. I currently query the table for these messages, convert the XML to a string in my C# code, and compare the strings with what I am about to insert. Unfortunately, SQL Server pretty-prints the data in XML-typed fields: what you store into the database is not necessarily exactly the same string you get back out later. It is functionally equivalent, but may have whitespace removed, etc. Is there an efficient way to compare an XML string I am considering inserting with those already in the database? As an aside, if I detect a duplicate I need to delete the older message and then insert the replacement.
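
    One way to sidestep the pretty-printing (a sketch; the table and column names are hypothetical): round-trip the candidate string through the xml type as well, so both sides are normalized identically before being compared as strings on the server.

        -- The candidate is normalized via xml exactly as the stored values were.
        DECLARE @candidate nvarchar(max);
        SET @candidate = N'<message><body>hello</body></message>';

        SELECT MessageID
        FROM   UserMessages
        WHERE  CONVERT(nvarchar(max), MessageXml)
             = CONVERT(nvarchar(max), CONVERT(xml, @candidate));

    With only a few messages per user the scan is cheap; at larger scale, storing a hash of the normalized text alongside the XML would make the duplicate check indexable.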

    Read the article

  • LINQ in SQLite for a Windows Store app does not have 'ThenBy' to order by multiple columns

    - by user1131657
    I have a Windows 8 Store application, and I'm using the latest version of SQLite for my database. I want to return some records from the database ordered by more than one column, but SQLite's LINQ provider doesn't seem to have the ThenBy method. My LINQ statement is below:

        from i in connection.Table<MyTable>()
        where i.Type == type
        orderby i.Usage_Counter  // ThenBy i.ID
        select i;

    So how do I sort by multiple columns in SQLite without doing another LINQ statement?
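
    A common workaround (a sketch): let sqlite-net's limited provider do the filtering, then switch to LINQ-to-Objects, which does have ThenBy, for the ordering. The sort happens in memory, which is fine for modest result sets:

        // Filter in SQLite, order in memory where ThenBy is available.
        var items = connection.Table<MyTable>()
                              .Where(i => i.Type == type)
                              .ToList()                    // leaves the SQLite provider here
                              .OrderBy(i => i.Usage_Counter)
                              .ThenBy(i => i.ID);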

    Read the article

  • LINQ saving images to varbinary

    - by m4rc
    I'm having issues saving images to a varbinary(max) field using LINQ. I can save files in the region of 10 KB to the database with no problems, but with files bigger than that it's as though it doesn't even try. I've had a look in SQL Server Profiler: when the file is around 10 KB I can see the full insert statement in the detail pane, but when the file is a bit bigger the detail pane shows nothing, although all the data besides the varbinary field is written to the database. The data is present in the data object just before SubmitChanges, so I can't figure out what's happening in between.

    Read the article

  • Django Redundancy

    - by Sunsu
    I've read many things about scaling Django, and the new multiple-DB support makes it much easier. However, I have not been able to find much information on good ways to create a fully redundant system (not just one that scales). I realize there are many parts to this problem, but the one I'm having trouble solving well is database redundancy. Is it possible to set up a "write slave" using Django's new multiple-DB support? If I had IP failover support, it seems like having a write slave would help solve the problem. Simple MySQL replication doesn't seem like it will work, due to slave lag, right? What's the typical method of creating a redundant database system? Any input or guidance would be greatly appreciated. I realize I could be asking the wrong questions!
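
    Django's multi-DB support covers the read half of this with a database router (a sketch below; 'default' and 'replica' are aliases you would define in DATABASES, and the router is registered via the DATABASE_ROUTERS setting). True write failover still has to come from the database layer, e.g. MySQL replication plus IP failover; Django only routes, and reads from the replica can be stale by exactly the slave lag mentioned above.

        # Sketch: writes go to the primary, reads go to a replica.
        class PrimaryReplicaRouter(object):
            def db_for_read(self, model, **hints):
                return 'replica'

            def db_for_write(self, model, **hints):
                return 'default'

            def allow_relation(self, obj1, obj2, **hints):
                return True  # both aliases point at the same logical database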

    Read the article

  • Can Grails' exception handler support the following error-handling flow?

    - by Andrew
    In my Rails app that I am porting to Grails, whenever an unexpected error occurs I intercept the error automatically and display a form to the user, informing them that an error has occurred and asking for further information. Meanwhile, as the form is rendered, I write the stack trace and other information about who was logged in to a database table. Then, if the form is submitted, I add that information to the error report. I cannot tell from the exception-handler documentation and the BootStrap examples whether this would let me grab all the information, including various session and request parameters, stuff it into a database, and then render a form. Any thoughts?
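
    Grails' declarative error handling can reproduce that Rails flow; a sketch (the ErrorReport domain class and its properties are assumptions): map the 500 status to a controller, where request.exception exposes the wrapped exception and the session and request are available as usual.

        // grails-app/conf/UrlMappings.groovy
        class UrlMappings {
            static mappings = {
                "500"(controller: "errorReport", action: "capture")
            }
        }

        // grails-app/controllers/ErrorReportController.groovy
        class ErrorReportController {
            def capture = {
                def e = request.exception        // what triggered the 500
                new ErrorReport(
                    trace: e?.toString(),
                    user:  session.userId,       // whatever login state you keep
                    uri:   request.forwardURI
                ).save(flush: true)
                render(view: 'feedbackForm')
            }
        }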

    Read the article

  • How do I submit really big amounts of data from a form?

    - by William Calleja
    I have an HTML form that posts a really big amount of data, which is eventually saved into SQL Server 2005. The form is as follows:

        <form name="frmForm" method="post" action="saveData.aspx">

    The target page takes the content of a control within the form and saves it to the database through a normal SQL insert statement. However, only a portion of the data is being saved. The field in the database is an ntext. Should I use a different field type? Is something happening while transferring from one page to the other? Or is something happening when I send the really big SQL statement through C# in saveData.aspx?
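
    Two usual suspects (not confirmed by the question): ASP.NET's request size limit, httpRuntime maxRequestLength in web.config (4 MB by default), silently capping the post; and the INSERT being assembled by string concatenation instead of sent as a typed parameter. A sketch of the parameterized version, with all names placeholders:

        using System.Data;
        using System.Data.SqlClient;

        public static void SaveBody(string connectionString, string bodyText)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                    "INSERT INTO Docs (Body) VALUES (@body)", conn))
            {
                // A typed ntext parameter carries the full string; a value
                // spliced into the SQL text is much easier to truncate.
                cmd.Parameters.Add("@body", SqlDbType.NText).Value = bodyText;
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }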

    Read the article

  • Simplest way to extend Doctrine for MVC models

    - by RobertPitt
    I'm developing my own framework that uses namespaces. Doctrine is already integrated into my autoloading system, and I'm now at the stage where I'll be creating the model system for my application. Usually I would create a simple model like so:

        namespace Application\Models;
        class Users extends \Framework\Models\Database {}

    which would inherit all the default database model methods. But with Doctrine I'm still learning how it all works, as it's not just a simple DBAL. I need to understand which part of Doctrine my classes should extend so that I can do the following:

        namespace Application\Models;
        class Users extends Doctrine\Something\Table
        {
            public $__table_name = "users";
        }

    and thus, within the controller, be able to do:

        public function Display($uid)
        {
            $User = $this->Model->Users->findOne(array("id" => (int)$uid));
        }

    Can anyone help me get my head around this?
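
    In Doctrine 2 the answer is that models extend nothing (a sketch; the annotations and the EntityManager wiring are assumed to be provided by the framework): entities are plain mapped classes, and the query API lives on repositories rather than on a Table base class.

        <?php
        namespace Application\Models;

        /** @Entity @Table(name="users") */
        class Users
        {
            /** @Id @Column(type="integer") @GeneratedValue */
            protected $id;

            /** @Column(type="string") */
            protected $name;
        }

        // In a controller, with $em a configured Doctrine\ORM\EntityManager:
        $user = $em->getRepository('Application\Models\Users')
                   ->findOneBy(array('id' => (int) $uid));

    Doctrine 1.x works the other way around: models extend Doctrine_Record and declare columns in setTableDefinition(), which is closer to the base-class style sketched above.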

    Read the article
